Sothis Medias

GAMES OF DICE AND GAMES OF GLASS

8/31/2020

“There is more in man and in music than in mathematics, but music includes all that is in mathematics.”—Peter Hoffman
Infotainment is usually thought of as light entertainment peppered with superficial “facts” and forgettable news. Yet another kind of infotainment exists, a musical kind based on mathematical algorithms. It is genuine entertainment filled with genuine information, and though it is mathematically modeled, none of it is fake.

In the twentieth century, interest in the multidisciplinary fields of Information Theory and Cybernetics led to dizzying bursts of creativity when their ideas were applied to making new music. These disciplines applied rigorous math to the study of communication systems: how a signal transmitted by one person can cut through the noise of other spurious signals to be received by another. They also made explicit the role of feedback inside a system, how signals can amplify themselves and trigger new signals. All of this was studied with complex equations and formulas.

Yet there is nothing new about the relationship between music and math.    

Algorithmic music has been made for centuries. It can be traced all the way back to Pythagoras, who thought of music and math as inseparable. If music can be formalized in terms of numbers, it can also be formalized as information or data. The “data” the ancients used to drive their compositions was the movement of the stars. Ptolemy is best known to us for his geocentric view of the cosmos and the ordered spheres the celestial bodies traveled on. Besides being an astronomer, Ptolemy was also a systematic musical theorist. He believed that math was the basis for musical intervals, and he saw those same intervals at play in the spacing of the heavenly bodies, each planet and body corresponding to certain modes and notes.

Ptolemy was just one of many who believed in the reality of the music of the spheres. Out of these ancient Greek investigations into the nature of music and the cosmos came the first musical systems. The musician who used them was thus a mediator between the cosmic forces of the heavens above and the life of humanity here below. 

Western music went through myriad changes in the centuries after Ptolemy. World powers rose and fell, new religions came into being. Out of the mystical monophonic plainchant uttered by Christian monks in candlelit monasteries, polyphony arose, and it called for new rules and laws to govern how the multiple voices were to sing together. This was called “canonic” composition. A composer in this era (the 15th century) would write a line for a single voice. The canonic rule gave the additional singers and voices the necessary instruction. For instance, one rule might direct a second voice to begin the melody already started by the first voice after a set amount of time. Other rules would call for inversion, retrograde motion, or other practices as applied to the music.

From this basis the rules, voices, and number of instruments were enlarged through the Renaissance until the era of “Common Practice,” roughly 1650 to 1900. This period encompassed baroque music and the classical, romantic, and impressionist movements. The 20th and 21st centuries are now giving birth to what Alvin Curran has called the New Common Practice.

In the Common Practice era tonal harmony and counterpoint reigned supreme, and a suite of rhythmic and durational patterns gave form to the music. These were the “algorithmic” sandboxes composers could play in.

The New Common Practice, according to Curran, encompasses “the direct unmediated embracing of sound, all and any sound, as well as the connecting links between sounds, regardless of their origins, histories or specific meanings; by extension, it is the self guided compositional structuring of any number of sound objects of whatever kind sequentially and/or simultaneously in time and in space with any available means.” I’ve begun to think of this New Common Practice as embracing the entire gamut of 20th and 21st century musical practices: serialism, atonality, musique concrete, electronics, solo and collective improvisation, text pieces, and the rest of it.

One vital facet of the New Common Practice is chance operations, the use of randomizing procedures to create compositions. Chance operations have a direct relation to information theory, but the approach was already making cultural inroads in the 18th century, when games of chance enjoyed a brief period of popularity among composers and the musically and mathematically literate. These games are a direct precursor to the deeper algorithmic musical investigations that began to flourish in the 20th century.
Much of this original algorithmic music work was done the old-school way, with pencil, sheets of paper, and tables of numbers. This was the way composers plotted voice-leading in Western counterpoint. Chance operations have also been used as one way of making algorithmic music, as in the Musikalisches Würfelspiel or musical dice game, a system that used dice to randomly generate music from tables of pre-composed options. These games were quite popular throughout Western Europe in the 18th century and a number of different versions were devised. Some didn’t use dice at all but simply worked on the basis of choosing random numbers.
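For readers who like to see the mechanism spelled out, here is a minimal sketch in Python of how such a dice game operates. The table of measures below is a made-up placeholder, not one of the historical tables; only the procedure (throw two dice per bar, look up a pre-composed measure) follows the description above.

```python
import random

# A minimal sketch of a musical dice game: each of the 16 bars of a minuet is
# chosen from a lookup table indexed by the throw of two dice (totals 2-12).
# The labels below are hypothetical placeholders, not historical measures.
MEASURE_TABLE = {
    bar: {total: f"bar{bar}-option{total}" for total in range(2, 13)}
    for bar in range(1, 17)
}

def throw_two_dice():
    return random.randint(1, 6) + random.randint(1, 6)

def compose_minuet():
    """Assemble one 16-bar minuet by throwing the dice once per bar."""
    return [MEASURE_TABLE[bar][throw_two_dice()] for bar in range(1, 17)]

print(compose_minuet())
```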

In his paper on the subject, Stephen Hedges wrote that the middle class in Western Europe was at the time enamored with mathematics, a pursuit as much at home in the parlors of the people as in the classrooms of professors. “In this atmosphere of investigation and cataloguing, a systematic device that would seem to make it possible for anyone to write music was practically guaranteed popularity.”

The earliest known example was Johann Philipp Kirnberger’s “The Ever-Ready Minuet and Polonaise Composer” of 1757. C. P. E. Bach came out with his own musical dice game, “A method for making six bars of double counterpoint at the octave without knowing the rules,” the following year, in 1758. In 1780 Maximilian Stadler published “A table for composing minuets and trios to infinity, by playing with two dice.” Mozart was even thought to have gotten in on the dice game in 1792, when an unattributed version appeared from his music publisher a year after the composer’s death. It has never been authenticated as the maestro’s own work, but as with all games of possibility, there is a chance.
These games may have been one of the many inspirations behind The Glass Bead Game by Hermann Hesse. The novel was one of the primary literary inspirations and touchstones for the young Karlheinz Stockhausen. The Glass Bead Game portrays a far-future culture devoted to a mystical understanding of music. The game sits at the center of the culture of Castalia, the fictional province devoted to the pursuit of pure knowledge.

As Robin Maconie put it the Glass Bead Game itself appears to be “an elusive amalgam of plainchant, rosary, abacus, staff notation, medieval disputation, astronomy, chess, and a vague premonition of computer machine code… In terms suggesting more than a passing acquaintance with Alan Turing’s 1936 paper ‘On Computable Numbers’, the author described a game played in England and Germany, invented at the Musical Academy of Cologne, representing the quintessence of intellectuality and art, and also known as ‘Magic Theater’.”

Hesse wrote his book between 1931 and 1943. The interdisciplinary game at the heart of the book prefigures Claude Shannon’s explosive Information Theory, which was established in his 1948 paper A Mathematical Theory of Communication. His paper in turn bears a debt to Alan Turing, whom Shannon met in 1942. Norbert Wiener also published his work on Cybernetics the same year as Shannon. All of these ideas were bubbling up together out of the minds of the leading intellectuals of the day: ideas about computable numbers, the transmission of information, communication, and thinking in systems, all of which would give artists practical tools for connecting one field to another, as Hesse showed was possible in the fictional world of Castalia.
​
Robin Maconie again had the insight to see the connection, noting that Alan Turing visualized “a universal computing machine as an endless tape on which calculations were expressed as a sequence of filled or vacant spaces, not unlike beads on a string”.
As the Common Practice era of Western music came to an end at the close of the 19th century, the mathematically inclined serialism came into its own, and as the decades wore on games of chance made a resurgence, defining much of the music of the 20th century. With the advent of computers, the paper-and-pencil methods have taken a temporary backseat in favor of methods that introduce programmed chance operations.
Composers like John Cage took to the I Ching with as much tenacity as the character Elder Brother did in Hesse’s book. Karlheinz Stockhausen meanwhile used his music as a means to make connections between myriad subjects and to create his own unique ‘Magic Theater’. Cybernetics and Information Theory each contributed to the thinking of these and other composers.
REFERENCES:
Stephen A. Hedges, “Dice Music in the Eighteenth Century,” Music & Letters 59 (1978): 180–87.

Lawrence M. Zbikowski, Conceptualizing Music: Cognitive Structure, Theory and Analysis, Oxford University Press, 2002.

Alvin Curran, “The New Common Practice”
http://www.alvincurran.com/writings/common.html

Robin Maconie, Other Planets: The Complete Works of Karlheinz Stockhausen 1950–2007, Rowman & Littlefield, 2016.

Note:

A set of musician’s dice has been made that offers up numerous possibilities for the practicing musician. Using random processes doesn’t have to be just for avant-garde composers anymore!

Musicians Dice: 
"The Musician’s Dice are patented, glossy black 12-sided dice, engraved in silver with the chromatic scale. They can be used in any number of ways – they bring the element of chance into the musical process. They're great for composing Aleatory and 12 tone-music, and as a basis for improvisation – they’re really fun in a jam session. They also make an effective study tool: they can be used as “musical flash cards” when learning harmony, and their randomness makes for fresh and challenging exercise in sight-singing and ear training. Plus, they look really cool on the coffee table, and give you a chance to throw around words like "aleatory.""

Below two musicians play around with using these dice.
Read the rest of the Radiophonic Laboratory series.

The Bell Sound 3: Grooving with Laurie Spiegel

7/29/2020

One of the key researchers and musicians exploring the new frontiers of science and music at Bell Labs was Laurie Spiegel. She was already an accomplished musician when, at the age of twenty-eight, she started working with interactive compositional software on the computers at Bell. The year was 1973.

Laurie brought her restless curiosity and ceaseless inquiry with her to Bell Labs. She was the kind of person who could see the creative potential in the new tools the facility was creating and make something timeless with them. Her skill and ability in doing so was something she had prepared herself for through a scholar’s devotion to musical practice and study.

She was interested in stringed instruments, the ones you strum and pluck. She picked up guitar, banjo, and mandolin for starters and learned to play them all by ear in her teens. She excelled in high school and was able to graduate early and get a jump start on a more refined education. Shimer College had an early entrance program and she made the cut. With Shimer as a launching pad she got into their study abroad program and left her native Chicago to join the scholars at Oxford University. While pursuing her degree in social sciences she decided she had better teach herself Western music notation. It was essential if she was to start writing down her own compositions. She managed to stay on at Oxford for an additional year after her undergraduate degree was completed. In between classes she would commute to London for lessons with composer and guitarist John W. Duarte, who fleshed out her musical theory and composition.

​She was no slacker.

Her devotion to music continued to flourish when she came back to the States. In New York she worked briefly on documentary films in the field of social science, but the drive to compose pushed her back onto the path of continuing education. So she headed back to school again, at Juilliard, going for a Masters in composition. Hall Overton, Emmanuel Ghent and Vincent Persichetti were some of her teachers between 1969 and 1972. Jacob Druckman was another; she ended up becoming his assistant and followed him to Brooklyn College. While there she also managed to find some time to research early American music under H. Wiley Hitchcock before completing her MA in 1975.

Laurie was no stranger to work, and to making the necessary sacrifices so she could achieve her aims and full artistic potential. Laurie’s thinking is multidimensional, and her art multidisciplinary. Working with moving images was a natural extension of her musicality. She supported herself in the 70s in part through soundtrack composition at Spectra Films, Valkhn Films, and the Experimental TV Lab at WNET (PBS). TV Lab provided artists with equipment to produce video pieces through an artist-in-residence program. Laurie held that position in 1976 and composed series music for the TV Lab's weekly "VTR—Video and Television Review." She also did the audio sound effects for director David Loxton’s SF film The Lathe of Heaven, based on the novel by Ursula K. Le Guin and produced for PBS by WNET.
Speaking of the Experimental TV Lab she said, "They had video artists doing really amazing stuff with abstract video and image processing. It was totally different from conventional animation of the hand-drawn or stop-motion action kind. Video was much more fluid and musical as a form."

Going to school and scoring for film and television wasn’t enough to satisfy Laurie’s endless curiosity. Besides playing guitar, she’d been working with analog modular instruments by Buchla, Electrocomp, Moog and Ionic/Putney. After a few years of experimentation she outgrew these synths and started seeking something with the control of logic and a larger capacity for memory. This led Laurie to the work being done with computers and music at Bell Labs in Murray Hill. At first she was a resident visitor at Bell Labs, someone who got the privilege of working and researching there, but not the privilege of being on Ma Bell’s payroll.

Laurie had already been playing the ALICE machine when the Bell Telephone Company needed to film someone playing it for the 50th anniversary of The Jazz Singer. She had already become something of a fixture at Murray Hill, so the company hired her as a musician. Not that the engineers at Bell who created the musical instruments were unmusical, but they were engineers. Laurie had the necessary background as a composer and an interest in how technology could open up musical expression; she was the perfect fit.

In 1973 while still working on her Masters she started getting her GROOVE on at Bell Labs, using the system developed by Max Mathews and Richard Moore.

GROOVE was to prove the perfect vehicle for expressing Spiegel’s creative ideas. While Max Mathews was bouncing around between a dozen different departments, Laurie was getting her GROOVE on at Murray Hill.

In the liner notes to the reissue of her Expanding Universe album, created with GROOVE, she wrote, “Realtime interaction with sound and interactive sonic processes were major factors that I had fallen in love with in electronic music (as well as the sounds themselves of course), so non-realtime computer music didn’t attract me. The digital audio medium had both of the characteristics I so much wanted. But it was not yet possible to do much at all in real time with digital sound. People using Max’s Music V were inputting their data, leaving the computer running over the weekend, and coming back Monday to get their 30 seconds of audio out of the buffer. I just didn’t want to work that way.

But GROOVE was different. It was exactly what I was looking for. Instead of calculating actual audio signal, GROOVE calculated only control voltage data, a much lighter computational load. That the computer was not responsible for creating the audio signal made it possible for a person to interact with arbitrarily complex computer-software-based logic in real time while listening to the actual musical output. And it was possible to save both the software and the computed time functions to disk and resume work where we left off, instead of having to start all over from scratch every time or being limited to analog tape editing techniques ex post facto of creating the sounds in a locked state on tape.”
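Spiegel’s point about control voltages versus audio is easy to see in a rough sketch. The Python fragment below is illustrative only: the 200 Hz control rate and the envelope shapes are my assumptions, not GROOVE’s actual figures, but it shows why computing a handful of slowly varying control functions was a far lighter load than computing every audio sample.

```python
import math

# Illustrative sketch of the GROOVE idea: the computer computes slowly varying
# control functions (here a pitch voltage and an amplitude voltage) at a low
# update rate, and analog oscillators and amplifiers turn them into sound.
CONTROL_RATE = 200  # control updates per second, far below any audio sample rate

def control_functions(duration_s=2.0):
    """Yield (pitch_volts, amp_volts) pairs, one per control period."""
    steps = int(duration_s * CONTROL_RATE)
    for i in range(steps):
        t = i / CONTROL_RATE
        pitch_volts = 2.0 + 0.5 * math.sin(2 * math.pi * 0.25 * t)  # slow sweep
        amp_volts = max(0.0, 1.0 - t / duration_s)                  # linear decay
        yield pitch_volts, amp_volts

# Two seconds of material takes only 400 control frames from the computer;
# rendering the audio itself at even 20 kHz would take 40,000 samples.
frames = list(control_functions())
print(len(frames), frames[0])
```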

RECORD IN A BOTTLE
Laurie’s most famous work is also the one most likely to be heard by space aliens. It was a realization of Johannes Kepler’s Harmonices Mundi using the GROOVE system, and it was the first track featured on the golden phonograph records placed aboard the Voyager spacecraft launched in 1977. The records contain sounds and images intended to portray the vast diversity of life and culture on planet Earth. The records form a kind of time capsule, a message in a bottle sent off into interstellar space.

Carl Sagan chaired the committee that determined what contents should be put on the record. He said “The spacecraft will be encountered and the record played only if there are advanced space-faring civilizations in interstellar space, but the launching of this 'bottle' into the cosmic 'ocean' says something very hopeful about life on this planet."

A message in a bottle isn’t the most efficient way of communicating if your purpose is to reach a specific person in a short amount of time. If, however, you trust fate, providence, and the natural waves of the ocean to guide the message to whoever is meant to receive it, it can be oracular.

Like many musicians before her Laurie had been fascinated by the Pythagorean dream of a music of the spheres. When she set about to realize Kepler’s 17th century speculative composition, she had no idea her music would actually be traveling through the spheres. Kepler’s Harmonices Mundi was based on the varying speeds of orbit of the planets around the sun. He wanted to be able to hear “the celestial music that only God could hear” as Spiegel said.

"Kepler had written down his instructions but it had not been possible to actually turn it into sound at that time. But now we had the technology. So I programmed the astronomical data into the computer, told it how to play it, and it just ran."
           
The resulting sounds aren’t the kind of thing you’d typically put on your turntable after getting home from a hectic day to relax. The sounds are actually kind of agitating. Yet if you listen to the piece as the product of a mathematical and philosophical exercise it can still be enjoyable.

Other sounds that can be heard on the Voyager Golden Records include spoken greetings from Earth-people in fifty-five languages, Johnny B. Goode by Chuck Berry, Melancholy Blues by Louis Armstrong, and music from all around the world, from folk to classical. Each record is encased in a protective aluminum jacket, and includes a cartridge and a needle for the aliens. Symbolic instructions, kind of like those for building a piece of furniture from Ikea, show the origin of the spacecraft and indicate how the record is to be played. In addition to the music and sounds, 115 images are encoded in analog form.

Laurie was in Woodstock, New York when she received a phone call requesting the use of her music for the record. “I was sitting with some friends in Woodstock when a telephone call was forwarded to me from someone who claimed to be from NASA, and who wanted to use a piece of my music to contact extraterrestrial life. I said, 'C'mon, if you're for real you better send the request to me through the mail on official NASA letterhead!'”
           
It turned out to be the real deal and not just a prank on a musician.

In 2012 Voyager 1 entered interstellar space. And it’s still out there running, sending back information. Laurie says, “It's extremely heartening to think that our species, with all its faults, is capable of that level of technical operation. We're talking Apple II level technology, but nobody's had to go out there and reboot them once!"
AN EXPANDING UNIVERSE

Laurie explored many other ideas within the structure of the highly adaptable GROOVE system, taking naps in the Bell Labs anechoic chamber when she needed a rest during the frequent all-nighters she pulled to get her work out into the world.
But getting those pieces into a form fit for a golden record, or for more common earthbound vinyl, was not easy. The results, however, were worth the effort of working with a system that took up space in multiple rooms.

“Down a long hallway from the computer room …was the analog room, Max Mathew’s lab, room 2D-562. That room was connected to the computer room by a group of trunk cables, each about 300 feet long, that carried the digital output of the computer to the analog equipment to control it and returned the analog sounds to the computer room so we could hear what we were doing in real time. The analog room contained 3 reel-to-reel 1/4” two-track tape recorders, a set of analog synthesizer modules including voltage-controllable lab oscillators (each about the size of a freestanding shoe box), and various oscillators and filters and voltage-controllable amplifiers that Max Mathews had built or acquired. There was also an anechoic sound booth, meant for recording, but we often took naps there during all-nighters. Max’s workbench would invariably have projects he was working on on it, a new audio filter, a 4-dimensional joystick, experimental circuits for his latest electric violin project, that kind of stuff.

Because of the distance between the 2 rooms that comprised the GROOVE digital-analog-hybrid system, it was never possible to have hands-on access to any analog synthesis equipment while running the computer and interacting with its input devices. The computer sent data for 14 control voltages down to the analog lab over 14 of the long trunk lines. After running it through 14 digital-to-analog converters (which we each somehow chose to calibrate differently), we would set up a patch in the analog room’s patch bay, then go back to the computer room and the software we wrote would send data down the cables to the analog room to be used in the analog patch. Many many long walks between those two rooms were typically part of the process of developing a new patch that integrated well with the controlling computer software we were writing.

So how was it possible to record a piece with those rooms so far apart? We were able to store the time functions we computed on an incredibly state-of-the-art washing-machine-sized disk drive that could hold up to a whopping 2,400,000 words of computer data, and to store even more data on a 75 ips computer tape drive. When ready to record, we could walk down and disconnect the sampling rate oscillator at the analog lab end, walk back and start the playback of the time functions in the computer room, then go back to the analog lab, get our reel-to-reel deck physically patched in, threaded or rewound, put into record mode and started running. Then we’d reconnect the sampling rate oscillator, which would start the time functions actually playing back from the disk drive in the other room, and then the piece would be recorded onto audio tape.”

Every piece on her album, The Expanding Universe, was recorded at Bell Labs. She computed in real time the envelopes for individual notes, their placement in the stereo field, and their pitches. “Above the level of mere parameters of sound were more abstract variables, probability curves, number sequence generators, ordered arrays, specified period function generators, and other such musical parameters as were not, at the time, available to composers on any other means of making music in real time.”
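As a loose illustration of what a “probability curve” over musical parameters can mean in practice, here is a small sketch: a weighted distribution over scale degrees from which a phrase is drawn. The scale and the weights are invented for the example and are not taken from The Expanding Universe.

```python
import random

# Illustrative sketch: a "probability curve" over pitches, from which the
# program draws a phrase. Scale degrees and weights are invented examples.
SCALE_DEGREES = [0, 2, 4, 5, 7, 9, 11, 12]   # major scale, semitones above the tonic
WEIGHTS = [8, 3, 5, 2, 8, 3, 1, 6]           # favor the tonic and the fifth

def generate_phrase(length=16, tonic_midi=60):
    """Draw a phrase of MIDI pitches from the weighted scale."""
    degrees = random.choices(SCALE_DEGREES, weights=WEIGHTS, k=length)
    return [tonic_midi + degree for degree in degrees]

print(generate_phrase())
```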

Computer musicians today who are used to working with programs like Reaktor, Pure Data, Max/MSP, Ableton, Supercollider and a slew of others take for granted the ability to manipulate the sound as it is being made, on the fly, and with a laptop. Back then it was state of the art to be able to do these things, but doing it required huge efforts, and took up a lot of space.

During the height of the progressive rock era, making music with computers was also risky business on the level of personal politics. Computers weren’t seen in a positive light. They were the tool of the Establishment, man. Used for calculating the paths of nuclear missiles and storing your data in an Orwellian nightmare. Musicians who chose to work with technology were often despised at this time. There was an attitude that you were ceding your creative humanity to a cold, dead machine. “Back then we were most commonly accused of attempting to completely dehumanize the arts,” she said. This macho prog-rock tenor haunted Laurie, despite her being an accomplished classical guitarist, capable of shredding endless riffs on an electrified axe if she chose to.
​
She also took risks in her compositions inside the avant-garde circles she frequented. Her music is full of harmony when dissonance was all the rage. “It wasn’t really considered cool to write tonal music,” she said, speaking of the power structures at play in music school. All I know is that it’s a good thing she listened to the music she had inside of her.
 VAMPIRE

Between 1974 and 1979 Laurie got the idea that GROOVE could be used to create video art with just a little tweaking of the system. Unlike the hours of music released on her Expanding Universe album, her video work at Bell didn’t get the documentation it deserved. This was in part due to the system’s early demise. Hardware changes at the lab meant that few records and tracings were left behind.

VAMPIRE, however, is still worth mentioning. It stands for Video And Music Program for Interactive Realtime Exploration/Experimentation. Laurie was able to turn GROOVE into a VAMPIRE with the help of computer graphics pioneer Ken Knowlton. Ken was also an artist and a researcher in the field of evolutionary algorithms, something else Laurie would later take up and apply to music. In the ’60s Knowlton had created BEFLIX (Bell Flicks), a programming language for bitmap computer-produced movies. After Laurie got to know him they soon started collaborating. It was another avenue for her to pursue her ideas for making musical structures visible.
 
Laurie had reasoned that if computer logic and languages had made it possible to interact with sound in real time, then the GROOVE system should be powerful enough to handle the real-time manipulation of graphics and imagery. She started working on this theory first using a program called RTV (Real Time Video) and a routine given to her by Ken. She wrote a drawing program, similar to what would now be called Paint. It became the basis on which VAMPIRE was built.

With Ken she worked out a routine for a palette of 64 definable bitmap textures. These could be used as brushes, alphabet letters, or other images. The patterns were entered with a box of 10 columns, each column holding 12 buttons, each button representing a bit that could be on or off.
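Exactly how those button columns mapped onto the texture palette isn’t spelled out here, but the general idea of a column of on/off buttons standing for bits can be sketched like this; the bit order and the sample pattern are assumptions for illustration, not VAMPIRE’s actual mapping.

```python
# Hedged sketch: ten columns of twelve on/off buttons, each button one bit.
def pack_column(buttons):
    """buttons: list of 12 booleans (top to bottom) -> a 12-bit integer."""
    value = 0
    for position, pressed in enumerate(buttons):
        if pressed:
            value |= 1 << position
    return value

# A hypothetical checkerboard pattern entered across the ten columns.
pattern = [[(row + col) % 2 == 1 for row in range(12)] for col in range(10)]
print([pack_column(column) for column in pattern])
```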

 In addition to weaving strands of sound Laurie was also a hand weaver. Cards with small holes in them have often been used over the years as one approach to the art form. Card weaving is a way to create patterned woven bands, both beautiful and sturdy. Some may think the cards are a simple tool, but they can produce weavings of infinite design and complexity. Hand weaving cards are made out of cardboard or cardstock, with holes in them for the threads, very similar to the Hollerith punch cards used for programming computers. She struck upon the idea that she could create punch cards to enter batches of patterns via the card reader on the computer. After she consulted some of her weaving books she made a large deck of the cards to be able to shuffle and input into the system.
  
Laurie quickly found that she enjoyed playing the drawing parameters just like someone would play a musical instrument. Instead of changing pitch, duration, and timbre, she could change the size, color and texture of an image as she drew it in real time with switches and knobs, making it appear on the monitor. Her skills as a guitarist directly translated to this ability. One hand would do the drawing, perhaps the same hand that did the strumming and plucking of the strings. The other hand would change the parameters of the image using a joystick and the other tools, just as it might change chords on one of her lutes, banjos or mandolins.

She saw the objects on the screen as melodies, but it was just one line of music. She wanted more lines, as counterpoint was her favorite musical form. She wanted to be able to weave multiple strands of images together. So she wrote into the program another realtime device to interact with: a square box of 16 buttons for typical contrapuntal options as applied to images. This gave her a considerable expansion of options and variables to play with.
         
After all this work she eventually hit a wall in what she could achieve with VAMPIRE in terms of improvisation. “The capabilities available to me had gotten to be more than I could sensitively and intelligently control in realtime in one pass to anywhere near the limits of what I felt was their aesthetic potential.” It had reached the point where she needed to think in terms of composition.

Ken Knowlton’s work with algorithms was beginning to rub off on her and she started to think of how “powerful evolutionary parameters in sonic composing, and the idea of organic or other visual growth processes algorithmicly described and controlled with realtime interactive input, and of composing temporal structures that could be stored, replayed, edited, added to (‘overdubbed’ or ‘multitracked’), refined, and realized in either audio or video output modalities, based on a single set of processes or composed functions, made an interface of the drawing system with GROOVE's compositional and function-oriented software an almost inevitable and irresistible path to take. It would be possible to compose a single set of functions of time that could be manifest in the human sensory world interchangeably as amplitudes, pitches, stereo sound placements, et cetera, or as image size, location, color, or texture (et cetera), or (conceivably, ultimately) in both sensory modalities at once.”
 
​Ever the night owl Laurie said of her work with the system, “Like any other vampire, this one consistently got most of its nourishment out of me in the middle of the night, especially just before dawn. It did so from 1974 through 1979, at which time its CORE was dismantled, which was the digital equivalent of having a stake driven through its art.”

ECHOES OF THE BELL                                   

The echoes of Laurie’s time spent at Bell Laboratories can be found in the work she has done since then, even as she was devastated by the death of GROOVE and VAMPIRE.

She went on to write the Music Mouse software in 1986 for Macintosh, Amiga and Atari computers and also founded the New York University Computer Music Studio. She has continued to write about music for many journals and publications and has continued to compose. Laurie has applied her knowledge of algorithmic composition and information theory to her work.

Now the tools for making computer music can be owned by many people and used in their own home studios, but the echo of the Bell is still heard.
--
This article only scratches the surface of Laurie's life and work. A whole book could be written about her, and I hope someone will. 
​
Sources:
http://retiary.org/ls/expanding_universe/index.html
http://retiary.org/ls/writings/vampire.html
https://www.newyorker.com/culture/culture-desk/an-electronic-music-classic-reborn
https://pitchfork.com/features/article/9002-laurie-spiegel/
https://voyager.jpl.nasa.gov/golden-record/
​
The liner notes to the 2012 reissue of Expanding Universe

Read the rest of the Radiophonic Laboratory series.
​

The Bell Sound 2: Taking it to the Max

7/29/2020

At Bell Labs Max Mathews was the granddaddy of all its music makers. If you use a computer to make or record music, he is your granddaddy too. In 1957 Max wrote a program for a digital computer called Music I. It was a landmark demonstration of the ability to write code commanding a machine to synthesize music. Computers can do things and play things that humans alone cannot, and Music I opened up a world of new timbral and acoustic possibilities. This was a perfect line of inquiry for the director of Bell Laboratories’ Behavioral and Acoustic Research Center, where Mathews explored a spectrum of ideas and technologies between 1955 and 1987. Fresh out of MIT, where he had received a Sc.D. in electrical engineering, Mathews was ready to get to work, and Music I was only the beginning of a long creative push in technology and the arts.
Max’s corner of the sprawling laboratory in Murray Hill, New Jersey carried out research in speech communication, speech synthesis, human learning and memory, programmed instruction, the analysis of subjective opinions, physical acoustics, industrial robotics and music.

Max followed the Music I program with II, III, IV and V, each iteration taking its capabilities further and widening its parameters. These programs carried him through a decade of work and achievement. As noted in the chapter on the Synthesis of Speech, Max had created the musical accompaniment to “Daisy Bell (A Bicycle Built for Two),” later made famous by the fictional computer HAL in Stanley Kubrick’s 2001: A Space Odyssey.
In 1970 he began working with Richard Moore to create the GROOVE system, intended to be a “musician-friendly” computer environment. The earlier programs broke incredible new ground, but their use leaned more towards those who could program computers and write code in esoteric languages than towards the average musician or composer of the time. GROOVE was the next step in bringing the technology to its potential users. It was a hybrid digital-analog system whose name stood for Generating Realtime Operations On Voltage-controlled Equipment.

Max notes, “Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines–they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
​
But Chowning’s FM synthesis had yet to make its mark at the time GROOVE was being created. It was still the early 70s, and affordable computers and synthesizers had yet to make it into homes outside those of the most devoted hobbyists. GROOVE was a first step toward making computer music in real time. The setup included an analog synth with a computer and monitor. The computer’s memory made it appealing to musicians, who could store their manipulations of the interface for later recall. It was a clever workaround for the limitations of each technology: the computer was used for its ability to store the musical parameters, while the synth was used to create the timbres and textures without relying on digital programming. This setup allowed creators to play with the system and fine-tune what they wanted it to do, for later re-creation.
Bell Labs had acquired a Honeywell DDP-224 computer from MIT to use specifically for sound research, and this is what GROOVE was built on. The DDP-224 was a 24-bit transistor machine that used magnetic core memory to store data and program instructions. Its disk storage also meant that libraries of programming routines could be written, allowing users to create customized logic patterns. A composition could be tweaked, adjusted and mixed in real time with the knobs, controls, and keys. In this manner a piece could be reviewed as a whole or in sections and then replayed from the stored data.
When the system was first demonstrated in Stockholm at the 1970 conference on Music and Technology organized by UNESCO, music by Bartok and Bach was played. A few years later Laurie Spiegel would grasp the unique compositional possibilities of the system and take it to the max.

In the meantime Max himself was a guy in demand. IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in France brought him on board as a scientific advisor as they built their own state-of-the-art sound laboratory and studios between 1974 and 1980.
​
In 1987 Max left his position at Bell Labs to become a Professor of Music (Research) at Stanford University.  There he continued to work on musical software and hardware, with a focus on using the technology in a live setting. “Starting with the GROOVE program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the Radio-Baton, plus a program, the Conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
Max/MSP
Today the legacy of the MUSIC software Max wrote through its many versions lives on in Max/MSP. Named in honor of Max Mathews, the software is a powerful visual programming language for multimedia performance that has grown out of its musical core. The program has been alive, well and growing for more than thirty years and has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. The software is designed and maintained by the company Cycling ’74.

Building off the gains in musical software developed by Mathews, Miller Smith Puckette (MSP) started to work on a program originally called The Patcher at IRCAM in 1985. This first version for Macintosh had a graphical interface that allowed users to create interactive scores. It wasn’t yet powerful enough to do real time synthesis. Instead it used MIDI and similar protocols to send commands to external sound hardware.

Four years later Max/FTS (Faster Than Sound) was developed at IRCAM. This version could be ported to the IRCAM Signal Processing Workstation (ISPW) for the NeXT computer system. This time around it could do real time synthesis using an internal hardware digital signal processor (DSP) making it a forerunner to the MSP extensions that would later be added to Max. 1989 was also the year the software was licensed to Opcode who promptly launched a commercial version at the beginning of the next decade.

Opcode held onto the program until 1997. During those years a talented console jockey named David Zicarelli further extended and developed the promise of Max. Yet Opcode wanted to end its run with the software. Zicarelli knew it had even further potential, so he acquired the rights and started his own company, Cycling ’74. Zicarelli’s timing proved fortuitous, as Gibson Guitar ended up buying Opcode and then, after owning it for a year, shut it down. Such is the fabulous world of silicon corporate buyouts.

Miller Smith Puckette had in the meantime released the independent and open-source composition tool Pure Data (Pd). It was a fully redesigned tool that still fell within the same tradition as his earlier program for IRCAM. Zicarelli, sensing that a fruitful fusion could be made manifest, released Max/MSP in 1997, the MSP portion being derived from Puckette’s work on Pure Data. The two have been inseparable ever since.
The achievement meant that Max was now capable of real time manipulation of digital audio signals sans dedicated DSP hardware. The reworked version of the program was also something that could work on a home computer or laptop. Now composers could use this powerful tool to work in their home studios. The musical composition software that had begun on extensive and expensive mainframes was now available to those who were willing to pay the entry fee. You didn’t need the cultural connections it took to work at places like Bell Labs or IRCAM. And if you had a computer but couldn’t afford the commercial Max/MSP you could still download Pd for free. The same is true today. 

 Extension packs were now being written by other companies, contributing to the ecology around Max. In 1999 the Netochka Nezvanova collective released a suite of externals that added extensive real-time video control to Max. This made the program a great resource for multimedia artists. Various other groups and companies continued to tinker and add things on.

It got to the point where Max Mathews himself, well into his golden years, was learning how to use the program named after him. Mathews received many accolades and appointments for his work. He was a member of the IEEE, the Audio Engineering Society, the Acoustical Society of America, the National Academy of Sciences, and the National Academy of Engineering, and a fellow of the American Academy of Arts and Sciences. He held the Silver Medal in Musical Acoustics from the Acoustical Society of America, and the Chevalier de l’Ordre des Arts et des Lettres, République Française.

Mathews died of complications from pneumonia on April 21, 2011, in San Francisco. He was 84. He was survived by his wife, Marjorie, his three sons and six grandchildren.
 
References:
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University
https://en.wikipedia.org/wiki/DDP-24
https://en.wikipedia.org/wiki/Max_(software)

Read the rest of the Radiophonic Laboratory series.

A CLASSICAL MUDDLY: THE WORLDS WORST

7/22/2020

​One of the worst symphony orchestras ever to have existed in the world now gets the respect it is due in a retrospective book published by Soberscove Press, collecting the memories, memorabilia and photographs of its talented members.  The Worlds Worst: A Guide to the Portsmouth Sinfonia, edited by Christopher M. Reeves and Aaron Walker, though long overdue, has arrived just in time. 
For those unfamiliar with the Portsmouth Sinfonia, here is the CliffsNotes version: founded by a group of students at the Portsmouth School of Art in England in 1970, this “scratch” orchestra was generally open to anyone who wanted to play. It ended up drawing art students who liked music but had no musical training, and any actual musicians who joined had to choose an instrument that was entirely new to them. Another of the limits or rules they set up was to only play compositions that would be recognizable even to those who weren’t classical music buffs, the William Tell Overture being one example, Beethoven’s Fifth Symphony and Also Sprach Zarathustra being others. Their job was to play the popular classics, and to do it as amateurs. English composer Gavin Bryars was one of their founding members. The Sinfonia started off as a tongue-in-cheek performance art ensemble but quickly took on a life of its own, becoming a cultural touchstone over the decade of its existence, with concerts, albums, and a hit single on the charts.
           
The book has arrived just in time because one of the lenses the work of the Portsmouth Sinfonia can be viewed through is that of populism; and now, when the people and politics on this planet have seen a resurgence of populist movements, the music of the Portsmouth Sinfonia can be recalled, reviewed, reassessed and their accomplishments given a wider renown.

One way to think of populism is as the opposite and antithesis of elitism. I have to say I agree with noted essayist John Michael Greer and his frequent tagline that “the opposite of one bad idea is usually another bad idea.” Populism may not be the answer to the world’s struggle against elitism, yet it is a reaction, knee-jerk as it may be. Anyone who hasn’t been blindsided by the bourgeoisie will know that the soi-disant elite have long looked down on those they deem lesser with an upturned nose and a sneer. Many of those sneering people have season tickets to their local symphony orchestra. They may not go because they are music lovers, but because it is a signifier of their class and social status. As much as the harmonious chords played under the guidance of the conductor’s swiftly moving baton induce in the listener a state of beatific rapture, there is on the other hand the very idea that attending an orchestral concert puts one at the height of snobbery. After all, orchestral music is not for everyone, as ticket prices ensure.

The Portsmouth Sinfonia was a remedy to all that. It put classical music back into the hands, and mouthpieces, of the people. It brought a sense of lightheartedness and irreverence into the stuffy halls so often filled with dour, serious people listening in such a serious way to such serious music. The Portsmouth Sinfonia made the symphony fun again, and showed that the canon of the classics shouldn’t just be left to the experts. Musical virtue wasn’t just for virtuosos, but could be celebrated by anyone sincere in their love of play.
Still, the Sinfonia was also more than that. It was an incubator for creative musicians and a doorway from which they could launch and explore what composer Alvin Curran has called the “New Common Practice,” that grab bag of twentieth century compositional tools, tricks, and approaches, from the serialism of Schoenberg to the madcap tomfoolery of Fluxus. This book shows some of these explorations through the voices of the members of the Sinfonia as they recollect their ten-year experiment at playing, and being playful with, the classical hits of the ages.
 
As Brian Eno noted in the liner notes to Portsmouth Sinfonia Plays the Popular Classics, essential reading that is provided in the book, “many of the more significant contributions to rock music, and to a lesser extent avant-garde music, have been made by enthusiastic amateurs and dabblers. Their strength is that they are able to approach the task of music making without previously acquired solutions and without a too firm concept of what is and what is not musically possible.” Thus they have not been brainwashed, I mean trained, to the strict standards and world view of the classical career musician.

Gavin Bryars, another founding member of the orchestra, speaks to this in an interview with Michael Nyman, also included in the book. He said, “Musical training is geared to seeing your output in the light of music history.” Such training is what can make the job of the classical musician stressful and stifling: stressful because of the degree of perfection players are required to achieve, and stifling because deviation, creative or otherwise, is disallowed. I’m reminded of how Karlheinz Stockhausen, when exploring improvisation and intuitive music, had to work hard at un-training his classically trained ensemble of musicians so they could be freed from the score.

The amateurs in the Portsmouth Sinfonia were free from the weight of musical history. If a wrong note was played, and many were, they could just get on with it and let it be. This created performances full of humor and happy accidents even as they tried to render the music correctly as notated.
Training and discipline in music can give a kind of perfectionist’s freedom when it comes to playing with total accuracy, but they take that freedom away when it comes to experimenting and exploring. Under the strictures of the conductor’s baton, playing in the symphony seems to be more about taking marching orders from a dictator than playing equally with a group of fellow musicians. John Farley, who took on the role of conductor within the Sinfonia, held his baton lightly. He wasn’t so much telling the other musicians how to play, or even keeping time, as acting out the part of what an audience expects of a conductor, serving as something of a foil for the musicians he was collaborating with in the performance.
​
One of the essential texts included in this book is “Collaborative Work at Portsmouth,” written by Jeffrey Steele in 1976. His piece shows how the Sinfonia really grew out of social concerns and a search for new ways of working together. Steele’s essay allies itself from the start with the constructivist movement in art, which he had been involved with as a painter. Constructivism was more concerned with the use of art in practical and social contexts. Associated with socialism and the Russian avant-garde, it took a steely-eyed look at the mysticism and spiritual content so often found in painting and music on the one hand, and the academicism music can degenerate into on the other. The Portsmouth Sinfonia coalesced in a dialectical resolution between these two tendencies. Again, the opposite of one bad idea is usually another. The Sinfonia bypassed these binary oppositions to create a third pole.
A version of Steele’s essay was originally supposed to be included in an issue of Christopher Hobbs’ Experimental Music Catalogue (EMC). A “Portsmouth Anthology” had been planned as an issue of the Catalogue, and a dummy of the publication was even made, but that edition of EMC never came out. It has been rescued here in this book. Other rescued bits include a selection of correspondence.

Besides the populist implications, and the permission given to enthusiastic amateurs to take center stage, the book explores the ideas, philosophies and development of the various artists and musicians who made up the Sinfonia itself. In the recollections section of the book, Ian Southwood, David Saunders, Suzette Worden, Robin Mortimore and the group’s manager and publicist Martin Lewis all reflect on their time as members. Reading these you get the sense that the whole thing was a real community effort, a collaboration where everyone had a role and took initiative in whatever ways they could.

A long essay by Christopher M. Reeves, one of the editors of the book, puts the whole project into historical and critical context. Reeves writes that their “transition from intellectual deconstruction to punchline symphony is a trajectory in art that has little precedent, and points to a more general tendency in the arts throughout the 1970s, in the move from commenting or critiquing dominant culture, to becoming subordinate to it.” His essay traces the group from its origins as a cross-disciplinary adventure to its eventual appropriation by the mainstream as a kind of novelty music you might hear on an episode of Dr. Demento’s radio show.
Just how seriously was the Sinfonia meant to be taken?

Reeves puts it thus: “It is within this question that the Sinfonia found a sandbox, muddying up the distinctions between seriousness and goofing off, intellectual exercises and pithy one-liners.” The Sinfonia’s last album was titled Classical Muddly. The waters left behind by them are still full of silt and only partially clear. This book does a good job of straining their efforts through a sieve and presenting the reader with the material and textual ephemera the group left behind, all in a beautifully made tome that is itself a showcase of the collaborative spirit found in the Portsmouth Sinfonia.

Robin Mortimore had told Melody Maker’s Steve Lake in 1974, “The Sinfonia came about partly as a reaction against Cardew [and his similar Scratch Orchestra]. He had the classical training and his audience was very elitist. But he wasn’t achieving anything. We listened, thought, ‘well, why don’t we have a go, it can’t be all that difficult. Y’know if Benjamin Britten and Sir Adrian Boult can do it, why can’t we?’”

In this time when so many artistic and musical institutions are underfunded, the Portsmouth Sinfonia can serve as a model. By having trained musicians play instruments they did not originally know how to play, and by having untrained musicians pick an instrument and be a part of an ensemble, they showed that with diligence anyone can bring the western canon of classical music to life, and often do it with much more humor and life than can be heard in contemporary concert halls.

Just maybe people are tired of being told how to think and what to do. Or how to play an instrument, and what “good” music should be played on that instrument. The Worlds Worst is a reminder of the inspiring example of the Portsmouth Sinfonia, and of the accomplishments that can be made when amateurs and inexperts take to the world’s stage and have fun making a raid on the Western classical canon, wrong notes and all.

The Worlds Worst: A Guide to the Portsmouth Sinfonia edited by Christopher M. Reeves and Aaron Walker is available from Soberscove Press.

The Bell Sound: from ALICE to AMY

7/7/2020

Just as the folks inside the Sound-House of the BBC’s Radiophonic Workshop continued to refine their approach and techniques to electronic music, another older sound house back across the pond in America continued to research new “means to convey sounds in trunks and pipes, in strange lines and distances”. Where the BBC Radiophonic Workshop used budget friendly musique concrete techniques to create their otherworldly incidental music, the pure research conducted at Bell Laboratories was widely diffused and the electronic music systems that arose out of those investigations were incidental and secondary byproducts. The voder and vocoder were just the first of these byproducts.

Hal Alles was a researcher in digital telephony. The fact that he is remembered as the creator of what some consider the first digital additive synthesizer is a quirk of history. Other additive synthesizers had been made at Bell Labs, but these were software programs written for their supersized computers.

Alles needed to sell his digital designs both inside and outside a company that had long been the lord of analog, and the pitch needed to be interesting. The synthesizer he came up with was his way of demonstrating the company’s digital prowess while entertaining his internal and external clients at the same time. What he came up with was called the Bell Labs Digital Synthesizer, sometimes known as the Alles Machine or ALICE.

It should be noted that Hal bears no relation to the computer in 2001: A Space Odyssey. The engineer recalls those heady days in the late sixties and 1970s.  “As a research organization (Bell labs), we had no product responsibility. As a technology research organization, our research product had a very short shelf life. To have impact, we had to create ‘demonstrations’. We were selling digital design within a company with a 100 year history of analog design. I got pretty good at 30 minute demonstrations of the real time capabilities of the digital hardware I was designing and building. I was typically doing several demonstrations a week to Bell Labs people responsible for product development. I had developed one of the first programmable digital filters that could be dynamically reconfigured to do all of the end telephone office filtering and tone generation. It could also be configured to play digitally synthesized music in real time. I developed a demo of the telephone applications (technically impressive but boring to most people), and ended the demo with synthesized music. The music application was almost universally appreciated, and eventually a lot of people came to just hear the music.”

Max Mathews was one of the people who got to see one of these demos, where the telephonic equipment received a musical treatment. Mathews was the creator of the MUSIC-N series of computer music programming languages. He was excited by what Alles was doing and saw its potential. He encouraged the engineer to develop a digital music instrument.
​
“The goal was to have recording studio sound quality and mixing/processing capabilities, orchestra versatility, and a multitude of proportional human controls such as position sensitive keyboard, slides, knobs, joysticks, etc,” Mathews said. “It also needed a general purpose computer to configure, control and record everything. The goal included making it self-contained and ‘portable’. I proposed this project to my boss while walking back from lunch. He approved it before we got to our offices.”
Picture
Harmonic additive synthesis had already been used back in the 1950s by linguistics researchers working on speech synthesis, and Bell Labs was certainly in on the game. Additive synthesis at its most basic works by adding sine waves together to create timbre. The more common technique until that time had been subtractive synthesis, which uses filters to remove or attenuate frequencies from a harmonically rich source in order to shape its timbre.
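To make the distinction concrete, here is a minimal Python sketch of additive synthesis, summing a handful of sine-wave partials into a single tone. The harmonic amplitudes below are arbitrary illustrations, not values taken from any Bell Labs design.

```python
import numpy as np

def additive_tone(f0, harmonic_amps, duration=1.0, sample_rate=44100):
    """Sum sine-wave partials at integer multiples of f0 to build a timbre."""
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    tone = np.zeros_like(t)
    for n, amp in enumerate(harmonic_amps, start=1):
        tone += amp * np.sin(2 * np.pi * n * f0 * t)
    # Normalize so the summed partials stay within [-1, 1]
    return tone / np.max(np.abs(tone))

# An arbitrary set of harmonic weights; changing them changes the timbre.
signal = additive_tone(220.0, [1.0, 0.5, 0.33, 0.25, 0.2])
```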

Computers were able to do additive synthesis with pre-computed wavetables, but it could also be done by mixing the output of multiple sine wave generators. This is essentially what Karlheinz Stockhausen did with Studie II, though he achieved the effect by building up layers of pure sine tones on tape rather than with a pre-configured synth or computer setup.

That method is laborious. A machine that can do it for you goes a long way towards being able to labor at other things while making music.

ALICE was a hybrid machine in that it used a mini-computer to control a complex bank of sound generating oscillators. The mini-computer was an LSI-11 from the Digital Equipment Corporation, a cost-reduced version of their PDP-11, which stayed in production for twenty years starting in 1970. This controlled the 64 oscillators, whose output was then mixed to create a number of distinct sounds and voices. It had programmable sound generating functions and the ability to accept a number of different input devices.

The unit was outfitted with two 8-inch floppy drives supplied by Heathkit, who made their own version of the LSI-11 and sold it as the H11. AT&T rigged it out with one of their color video monitors. A custom converter was made that sampled the analog inputs and converted them to 7-bit digital resolution 250 times a second. There were a number of inputs used to work with ALICE in real time: two 61-key piano keyboards, 72 sliders alongside various switches, and four analog joysticks just to make sure the user was having fun. These inputs were interpreted by the computer, which in turn sent parameters to the sound generators. The CPU could handle around 1,000 parameter changes per second before it got bogged down.
The sound generators themselves were quite complex. A mere 1,400 integrated circuits were used in their design. Out of the 64 oscillators, the first bank of 32 were used as master signals, which meant ALICE could be expected to achieve 32-note polyphony. The second set was slaved to the masters and generated a series of harmonics. If this wasn’t enough sound to play around with, ALICE was also equipped with 32 programmable filters and 32 amplitude multipliers. With the added bank of 256 envelope generators ALICE had a lot of sound potential and many sound paths that could be explored through her circuitry. All of those sounds could be mixed in many different ways into the 192 accumulators she was also equipped with. Each accumulator was then sent to one of the four 16-bit output channels and reconverted from digital back into analog at the audio output.

Waveforms were generated by looking up the amplitude for a given time in a 64k-word ROM table. There were a number of tricks Alles programmed into the table to reduce the number of calculations the CPU needed to run. 255 timers outfitted with 16 FIFO stacks controlled the whole shebang. The user put events into a timestamp-sorted queue that fed it all into the generator.
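The specifics of Alles’s table tricks aren’t spelled out here, but the general principle of wavetable lookup is easy to sketch. The Python fragment below is a simplified illustration of that principle, not a reconstruction of the ALICE hardware: one cycle of a waveform is pre-computed into a table, and an oscillator steps through it with a phase increment proportional to the desired frequency.

```python
import numpy as np

SAMPLE_RATE = 44100
TABLE_SIZE = 4096                      # a small stand-in for ALICE's much larger ROM table

# Pre-compute one cycle of the waveform (here a simple sine).
wavetable = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def wavetable_osc(freq, n_samples):
    """Generate a tone by stepping through the table at a rate set by freq."""
    phase = 0.0
    step = freq * TABLE_SIZE / SAMPLE_RATE   # table entries to advance per sample
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = wavetable[int(phase) % TABLE_SIZE]
        phase += step
    return out

note = wavetable_osc(440.0, SAMPLE_RATE)   # one second of A440
```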

Though the designers claimed the thing was portable, all the equipment made it weigh in at a hefty 300 pounds, making it an unlikely option for touring musicians. As the world’s first true digital additive synthesizer it was quite the boat anchor.

The machine was completed in 1976, and only one full-length composition was ever recorded for it, though a number of musicians, including Laurie Spiegel, whose work will be explored later, played the instrument in various capacities. For the most part, though, the Alles synth was brushed aside; even if the scientists and engineers at Bell Labs were tasked with pure research, they still had a business to answer to. A marketing use was found for Hal’s invention once again in 1977.

In that year the Motion Picture Academy was celebrating the 50th anniversary of the talkies. The sound work for The Jazz Singer, the first talking picture, had been done by Western Electric, with their Vitaphone system technology. The successful marriage of moving image and sound first seen and heard in that movie wouldn’t have been possible without the technology developed by the AT&T subsidiary and Ma Bell was still keen to be in on the commemoration of the film. ALICE is what they chose to use as the centerpiece for the event.

A Bell Labs software junkie by the name of Doug Bayer was brought in to improve the operating system of the synth and to make the human interface a bit more user-friendly. The instrument was flown to Hollywood at considerable risk. The machine was finicky enough without transporting it; taking it on a plane where it could get banged up, whacking out its components in a single bump and potentially sending it into meltdown mode, was not out of the question.
​
So they hired musician and composer Laurie Spiegel, who’d already been working at the Labs without pay, to be filmed playing ALICE. The film would be shown in the event that the musician they hired to play it live, Roger Powell, was unable to do so due to a malfunction. This film is the only known recording of the instrument in performance.
Yet to hear how the Bell Labs Digital Synthesizer sounds, look no further than Don Slepian’s album Sea of Bliss. Max Mathews had hired Slepian to work with the synth as an artist in residence between 1979 and 1982. Don had been born into a scientific family. From an early age he demonstrated technical talent and musical ability. He had begun making music in 1968, programming his own computers, soldering together his own musical circuits, and experimenting with tape techniques. Working for the Defense Advanced Research Projects Agency (DARPA), Don tested an early iteration of the internet, and for a time he lived in Hawaii and played as a synthesizer soloist with the Honolulu Symphony. All of this made him a perfect fit as artist in residence at Bell Labs.
​
The results of his work are on the album: epic-length cuts of deep ambient music bringing relaxation and joy to the listener. It’s the audio equivalent of taking Valium. Listen to it and feel the stress of life melt away.
Don Slepian described his 1980 masterpiece for the online Ambient Music Guide. “It’s stochastic sequential permutations (the high bell tones), lots of real time algorithmic work, but who cares? It's pretty music: babies have been born to it, people have died to it, some folks have played it for days continuously. No sequels, no formulas. It was handmade computer music."
The Bell Labs Digital Synthesizer was soon to leave its birthplace after Don had done his magic with the machine. In 1981 ALICE was disassembled and donated to the TIMARA Laboratories at the Oberlin Conservatory of Music.

Oberlin, and by extension TIMARA (Technology in Music and Related Arts), has a history that reaches back to the very beginnings of electronic music in the 19th century. None other than Elisha Gray was an adjunct physics professor at the college. He is considered by some to be the father of the synthesizer due to his invention of the musical telegraph and his seventy-plus patents for inventions that were critical in the development of telecommunications, electronic music and other fields. If it had not been for Gray’s electromechanical oscillator, Thaddeus Cahill would never have been able to create that power-hungry beast of an instrument, the Telharmonium.

The Music Conservatory at Oberlin dates back to 1865, and the school joined the ranks of the radio and television stations that had built electronic music studios with the opening of TIMARA in 1967. The department was founded by Olly Wilson in response to composition students’ demand for classes in electronics. It became the first of a number of departments in American higher education to create a space for experimentation in analog synthesis and mixed media arts.

Though ALICE is now enshrined in one of the many sound laboratories at TIMARA, her influence continued to be felt after she was sequestered there. A number of commercial synthesizers based on the Alles design were produced in the 1980s.
Picture
The Atari AMY sound chip, the smallest of the products to be designed, is a case in point. The name stood for Additive Music sYnthesis. It still had 64 oscillators, but they were reduced to a single-IC sound chip, a chip that had numerous design issues. Additive synthesis could now be done with less, though it never really got into the hands of users. It was scheduled to be used in a new generation of 16-bit Atari computers, in the next line of game consoles, and by their arcade division. AMY never saw the light of day in any configuration. Even after Atari was sold in 1984, she remained waiting in the dark to be used on a project, but was cut from inclusion in new products after many rounds at the committee table, where so many dreams wind up dead.

Still other folks in the electronic music industry made use of the principles first demonstrated by ALICE. The Italian company Crumar and Music Technologies of New York got into a partnership to create Digital Keyboards. Like Atari they wanted to resize the Alles Machine, to make it smaller. They came up with a two-part invention using a Z-80 microcomputer and a single keyboard with limited controls. They gave it the unimaginative name Crumar General Development System, and it sold in 1980 for a cool $30,000. Since it was out of the price range of your average musician, they marketed the product to music studios. Wendy Carlos got her hands on one, and the results can be heard on the soundtrack to Tron.
 
Other companies got into the game and tried to produce something similar at lower cost, but none of them really managed to find a home in the market due to the attached price tags. When Yamaha released the DX7 in 1983 for $2,000, demand for additive synths tanked. The DX7 implemented FM synthesis, which allowed it to achieve many of the same effects as ALICE with as few as two oscillators. FM synthesis and its relationship to FM radio modulation will be looked at in detail in another article.
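The two-oscillator claim can be illustrated in a few lines. In the simplest form of FM synthesis one oscillator (the modulator) wobbles the phase of another (the carrier), and the modulation index sets how many sidebands, and so how bright a timbre, that single carrier produces. This is a generic textbook sketch in Python, not the DX7’s actual operator algorithm.

```python
import numpy as np

def fm_tone(carrier_hz, mod_hz, index, duration=1.0, sample_rate=44100):
    """Two-oscillator FM: a modulator varies the carrier's phase."""
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    modulator = index * np.sin(2 * np.pi * mod_hz * t)
    return np.sin(2 * np.pi * carrier_hz * t + modulator)

# A 2:1 modulator-to-carrier ratio with a moderate index yields a rich,
# harmonically complex tone from just two sine oscillators.
tone = fm_tone(carrier_hz=220.0, mod_hz=440.0, index=5.0)
```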
It had all started out as a way for Hal Alles to look at potential problems in digital communications, such as switching, distortion, and echo. It ended up becoming a tool for extending human creativity.
 
References: 
http://120years.net/bell-labs-hal-alles-synthesiser-hall-alles-usa-1977/
https://en.wikipedia.org/wiki/Bell_Labs_Digital_Synthesizer
http://www.atarimuseum.com/computers/8BITS/XL/ASG/Chips/AMY/index.html
https://en.wikipedia.org/wiki/TIMARA

Read the other articles in the Radiophonic Laboratory series.
0 Comments

The Spherical Vortices of Delia Derbyshire

5/26/2020

0 Comments

 
Picture
Just as Daphne Oram was stepping out of the BBC Radiophonic Workshop, another lady was stepping in. Though Delia Derbyshire may not be a household name, the sound of her music is certainly embedded in the brains of several generations of science fiction fans, as she realized the iconic score for the Doctor Who theme song in the Workshop studios. With the original Doctor Who series lasting for twenty-six continuous seasons from 1963 to 1989, the song has touched the lives of millions of people around the world. I credit my own love of electronic music to having been a fan of Doctor Who since I was ten years old.
           
I remember the first time I watched, catching a rerun of an episode late one Saturday night on the local PBS station while my parents and grandparents visited at my great-grandparents’ house, all those adults talking and playing Scrabble around the kitchen table. The show was like a revelation. It was the fifth Doctor, played by Peter Davison. Not only was the storyline a subject of fascination, but the sounds, and the way they melded with the visuals, transported my imagination. I became a fan at that moment and Doctor Who has been my favorite TV show ever since. Though my first love remains the original series, and my first Doctor, the first few seasons of the 2005 revival exceeded my expectations and I continue to tune in.

There is one area where I am a Doctor Who purist, though, and that is the theme song. Each new regeneration of the travelling Time Lord saw the producers of the show making slight adjustments to the song. Eventually it came to a point where, though the theme was the same, they did not use the original version as recorded, and essentially created, by Delia Derbyshire. It’s quite a shame, because there was magic in that mix.
The original tune was written by Ron Grainer, but he didn’t have anything to do with the production of the recording. The project of realizing it and arranging it for electronics was given to Delia.

But how did she end up at the BBC in the first place?

She had been a bright girl, learning to read and write at an early age, and started training on the piano at age eight, but like many of us who have grown up as part of the working or middle class, it was radio that opened up her world. Delia said “the radio was my education”. Being involved with radio also ended up being her fate. After graduating from Barr’s Hill Grammar School in 1956 she was accepted by both Oxford and Cambridge. This was “quite something for a working class girl in the 'fifties, where only one in 10 were female,” she said. She ended up going to Girton College, Cambridge, because of a mathematics scholarship she had received.

Despite some success with the mathematical theory of electricity, she claimed to have not done so well in school at the time. So she switched her focus to include music, specializing in medieval and modern music history, while graduating with a BA in mathematics. She also received a diploma, or what the British call a licentiate, from the Royal Academy of Music in the study of pianoforte.

While in school she had developed an interest in the musical possibilities of everyday objects. This would later find its full expression in the musique concrète she would make and master at the BBC. While still a student, in 1958, she also had the opportunity to visit the World’s Fair in Brussels, where she experienced Edgard Varèse's Poème Électronique installed in Le Corbusier's pavilion. Varèse's work was a touchstone for the new generation of electronic musicians; Daphne Oram had also experienced it at the Fair.

Upon finishing her schooling she approached the university career office for advice. The pieces had been arranged on the board of her life, but she needed help with making her next move. She told the counselor she had an interest in “sound, music and acoustics”, to which they recommended a career in either deaf aids or depth sounding. With their advice wanting, she made a move on her own and tried to get a gig at Decca Records, but was told no: no women were employed in the label’s recording studio.

In lieu of a job with Decca she scored a position with the UN in Geneva as a piano and math teacher to the children of various consuls and diplomats. Later she worked as an aide to Gerald G. Gross, who served in diplomatic functions and oversaw conferences for the International Telecommunication Union. Eventually she moved back home to Coventry, where she taught at a primary school. This was followed by a brief stint in the promotions department at Boosey & Hawkes, a music publisher.

The following year in 1960 she stepped into the BBC as trainee assistant studio manager. Her first job there was working on the Record Review, a program where hoity-toity critics gave their highfalutin opinions on classical music recordings. Just like Daphne Oram, she had a well-developed sense of where to drop the needle on any given platter. Delia said "some people thought I had a kind of second sight. One of the music critics would say, ‘I don't know where it is, but it's where the trombones come in’ and I'd hold it up to the light and see the trombones and put the needle down exactly where it was. And they thought it was magic."

Of this time period she further elaborated, “It was very exciting, especially on the music shows. All the records had to be spun in by hand and split-second timing was essential. When tapes came in I used to mark them with yellow markers to ensure that one followed another, and that there were no embarrassing gaps in between.”

Not long after she had started working on the Record Review she heard about the Sound-House Daphne Oram had helped create, the Radiophonic Workshop, and she knew she wanted to be in the Sound-House, developing and working in the new field of electronic and electro-acoustic music, exploring the widest parameters of musical research.

When she approached the heads of Central Programme Operation with her wish to work in the Radiophonic Workshop, they were baffled. The Workshop wasn’t a place most people sought out to work in; it was a place people were assigned to, no doubt with grumbling resentment. It was a place only the eccentric, or the visionary, would choose to go.
“I had done some composing but I had a running battle with the B.B.C. to let me specialise in this field. Eventually they gave me three months to prove I was good -- and I'm still here,” she noted in a newspaper article.

In 1962 Delia got her wish and was assigned to the Radiophonic Workshop in Maida Vale. For the next eleven years she gave the BBC a herculean effort, creating sound and music for about 200 radio and television shows.

​“I have to sense the mood which the producer is trying to achieve. He may want something abstract, or it may be a piece with changing moods which have to correspond to specific cues in either dialogue or graphic designs.” 
The next year Doctor Who came to the air, and its theme song was one of the first on television to be made entirely with electronics.

Brian Hodgson, who worked with Delia at the Workshop, and also produced a lot of incidental music for Doctor Who commented on her work on the theme. “It was a world without synthesisers, samplers and multi-track tape recorders; Delia, assisted by her engineer Dick Mills, had to create each sound from scratch. She used concrete sources and sine- and square-wave oscillators, tuning the results, filtering and treating, cutting so that the joins were seamless, combining sound on individual tape recorders, re-recording the results, and repeating the process, over and over again.”

Interviewed about the theme on a 1964 episode of the radio show Information Please, she said, “the music was constructed note by note without the use of any live instrumentalists at all,” and went on to demonstrate the use of various oscillators, including the Workshop’s famous wobbulator, which she said was “simply an oscillator which wobbles”.

It was a laborious process and the Radiophonic Workshop had become the perfect laboratory for the great works of sonic separation, granulation, elaboration and final distillation of the musical substance.

To create the Doctor Who theme each note was individually recorded, cut, and spliced. Some of the base materials used for the process included a single plucked string, white noise, and the harmonic waveforms of test-tone oscillators. The bass line was the single plucked string. The pattern for the bass was made by splicing copies of it, sped up or slowed down to reach the right pitch, over and over again. The swoop of the lower bass layer was made through careful and calculated tweaking of an oscillator’s pitch. The melody was played on a keyboard attached to a rack of oscillators, while the bubbling hiss and fry of some etheric vapor was made by filtering white noise and then arranging it in time on tape. Some of the notes were also redubbed at varying volumes to create the dynamics heard in the song.
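The arithmetic behind those sped-up and slowed-down splices is straightforward: playing a recording at r times its original speed multiplies every frequency in it by r, so raising a note one semitone needs a speed factor of 2^(1/12), roughly 1.059. The Python sketch below imitates varispeed by resampling a recorded note; it is a rough modern stand-in for the tape technique, not a description of the Workshop’s equipment.

```python
import numpy as np

def varispeed(recording, speed):
    """Imitate playing a tape at `speed` times normal: pitch and duration change together."""
    n_out = int(len(recording) / speed)
    src_positions = np.arange(n_out) * speed
    return np.interp(src_positions, np.arange(len(recording)), recording)

# A plucked-string stand-in: a decaying 110 Hz sine.
sr = 44100
t = np.arange(sr) / sr
pluck = np.exp(-3 * t) * np.sin(2 * np.pi * 110 * t)

semitone = 2 ** (1 / 12)
pluck_up_a_fifth = varispeed(pluck, semitone ** 7)   # 7 semitones, roughly a perfect fifth
```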

With all the basic materia in the laboratory now prepared, ready at the proper pitch and volume, it all needed to be conjoined. The first step involved taking a line of music (the bass, the melody, or the vaporous bubbles of white noise) and trimming each note to length by cutting the tape and sticking the pieces together in the right order. Next, further rectifications were required, distilling these elements down further and further until a final mix was completed.
 
At the time there were no multitrack tape machines to ease the process, so a method to mix it all together had to be improvised. Each separate portion of the song, on its own reel of tape, was played on a separate tape machine and the outputs were mixed together. Getting it all to synchronize was just one of the obstacles, as not all tape players play back at exactly the same speed, and not all of them stay in sync once started. A number of submixes, or distillations, were created, and these were in turn synced together before the music could finally be said to be finished.
   
When Ron Grainer first heard Delia’s realization of his score he was more than delighted and asked, "Did I really write this?"

Delia replied, "Most of it."
Picture
Grainer made a valiant effort to give Delia credit as a co-composer of the theme. His attempt was blocked by the bureaucrats at the BBC who had the official policy of keeping the members of the Workshop anonymous and only giving credit to the group as a whole. Delia was not credited on screen for her work until the 50th anniversary special of Doctor Who.

Even so, her tenure in the Workshop was off to a grand start and she continued to produce music for radio, television and beyond.   

Between 1964 and 1965 Delia got to expand her palette of sound across the canvas of radio in collaboration with playwright Barry Bermange on a series of four pieces called Inventions for Radio. These pieces were broadcast on the BBC’s Third Programme and involved interviews with people on the street on such heavy subjects as dreams and the existence of God, collaged against a background of electronic soundscapes and strange noises. It was a new form of documentary radio art.

Working with Bermange, the voices of the interviewees were edited in a non-linear way, creating insightful juxtapositions. For the episode on dreams she used one of her favorite musical sources, a green metal lightbulb shade being struck.  The sound, as always, was later manipulated in the studio.     

And even though her work for the Workshop continued to remain anonymous, her reputation as a musician and electronic composer started to spread to some of the senior officials at the communications behemoth. Martin Esslin, the Head of Radio Drama, sent a memo to Desmond Briscoe, then head of the Workshop, noting his regret that Delia Derbyshire and her co-worker John Harrison were not able to receive credit for the work they had done on a production of “The Tower”.

He wrote, “I have just been listening to the playback of the completed version of ‘The Tower’ and should like to express my deep appreciation for the excellent work done on this production by Delia Derbyshire and John Harrison. This play set them an extremely difficult task and they rose to the challenge with a degree of imaginative intuition and technical mastery which deserves the highest admiration and which will inevitably earn a lion's share of any success the production may eventually achieve. I only wish that it were possible for the names of contributors of this calibre to be mentioned in the credits in the Radio Times and on the air. But failing this I should like to register the fact that I regard their contribution to this production as being at least of equal importance to that of the producer himself.”
UNIT DELTA PLUS, KALEIDOPHON & WHITE NOISE
Picture
As Delia’s reputation grew, she began work on other projects outside the umbrella of the BBC. She joined forces with her friend and fellow Radiophonic Workshop member Brian Hodgson, along with Peter Zinovieff, founder of the synthesizer company EMS, to establish Unit Delta Plus. The purpose of this organization was to promote and create electronic music. A studio Zinovieff had built in a shed behind his townhouse at 49 Deodar Road in Putney served as their operational headquarters.

Zinovieff had followed the research of Max Mathews and Jean-Claude Risset at Bell Labs. He had also read David Alan Luce’s 1963 MIT thesis, “Physical Correlates of Nonpercussive Musical Instrument Tones.” You know, the kind of thing you read on a rainy day. These were some of the influences on his own work. The three were quite the trio.

They participated in a few experimental and electronic music festivals. In 1967 they demonstrated their electronic prowess at The Million Volt Light and Sound Rave. This was the same event for which The Beatles had been commissioned to create an avant-garde sound piece; the song they came up with in response, Carnival of Light, had its only public playing there.

Though there were intervening projects, the next major one outside of the BBC was to mark another landmark in the history of electronic music. It all got sparked when Derbyshire and Hodgson met David Vorhaus.

Vorhaus recalls, “I met Brian Hodgson and Delia Derbyshire, who were then in a band called Unit Delta Plus. I was on my way to an orchestral gig when the conductor told me that there was a lecture next door on the subject of electronic music. The lecture was fantastic and we got on like a house on fire, starting the Kaleidophon studio about a week later!"

Vorhaus was a classical musician, trained as a bass player. He also happened to be a physics graduate and electronic engineer. The three were an electrical storm of creative energy. Together they created the Kaleidophon studio at 281-283 Camden High Street, where they made music and sound for a variety of London theatres. They also made library music, contributing many tracks to the Standard Music Library, a firm set up in collaboration with London Weekend Television (ITV) and Bucks Music Group in 1968 to provide the music for hit TV shows. These recordings were done under pseudonyms. Derbyshire’s compositions were credited to Li De La Russe, something of an anagram, with a reference to her auburn hair to boot. A number of these songs made it onto the ITV shows The Tomorrow People and Timeslip, which rivaled Doctor Who.

When not working on a commission they worked on their first album as the band White Noise, titled An Electric Storm.

The album is a masterpiece, spanning genres from giddy electro-pop to more austere and serious sonorities. It covers a deep emotional gamut and is an excellent and dizzying listen from start to finish. Released on the Island label, it was something of a sleeper album, or what some call a perennial seller: one of those albums that didn’t do as well when it was first released as it has done over time. Now it is a continual best seller. Considering the difficulties the band had in even getting it onto a label, their achievement is even more remarkable.

​Though the name White Noise lives on with David Vorhaus, Hodgson and Delia left the project and the studio after the first album.  
MUSIC OF SPHERES AND I.E.E.100​
A number of other commissions, recordings and events took place as the last years of the sixties unspooled. She made music for a film by Yoko Ono, contributed to Guy Woolfenden’s electronic score for the Royal Shakespeare Company’s production of Macbeth, and collaborated with Anthony Newley on a demo song called Moogies Bloogies that has never seen an official release.

In 1970 Delia worked on an episode of the TV series Biography that detailed the life of Johannes Kepler, the astronomer who showed that planets orbit the sun in ellipses, not perfect circles. The episode was titled I Measured the Skies, a phrase taken from his epitaph, which reads:

                              I measured the skies, now the shadows I measure,
                             Sky-bound was the mind, earth-bound the body rests.


In his 1619 book Harmonices Mundi, Kepler explored the relationships between musical harmony, congruence in geometrical forms, and physical phenomena, and set out his third law of planetary motion.

Medieval philosophers had spoken of the music of the spheres as metaphor. Kepler discovered actual physical harmonies in planetary motion, finding harmonic proportions in the differences between the maximum and minimum angular speeds of a planet in its orbit.

A newspaper article by Christine Edge that came out around the time explained, “Kepler had interpreted the sounds made by the planets into scale notes, and Delia subjected them to her own gliding scale of electronic sounds.” A few years later she revisited the music of the spheres, this time producing a piece for a segment on Kepler in Jacob Bronowski's 1973 TV series The Ascent of Man. Her short piece accompanies a simple computer graphic shown on the screen.

Delia was in her own sphere and orbit, and as her velocity accelerated the people around her started to notice its wobble.

In 1971 the Institution of Electrical Engineers turned 100. The BBC commemorated the anniversary with the Radiophonic Workshop in Concert event on the 19th of May. Delia composed the piece I.E.E. 100 for the program, but the tape almost didn’t survive. She looked to radio and the history of electrical engineering for inspiration.

She said, “I began by interpreting the actual letters, I.E.E. one hundred, in two different ways. The first one in a morse code version using the morse for I.E.E.100. This I found extremely dull, rhythmically, and so I decided to use the full stops in between the I and the two E's because full stop has a nice sound to it: it goes di-dah di-dah di-dah.
I wanted to have, as well as a rhythmic motive, to have musical motive running throughout the whole piece and so I interpreted the letters again into musical terms. 'I' becomes B, the 'E' remains and 100 I've used in the roman form of C."

Further elements of the piece included many touchstones of the history of telecommunications, from the development of electricity in communication and the earliest telephone to the Americans landing on the moon. She sampled the voice of Mr Gladstone congratulating Mr Edison on inventing the phonograph, the voice of Lord Reith, the first general manager of the BBC, at the opening and closing down of Savoy Hill, where the BBC had its initial recording studios, and Neil Armstrong speaking as he stepped onto the surface of the moon.

“The powerful punch of Delia's rocket take-off threatened the very fabric of the Festival Hall,” Desmond Briscoe wrote.

This was one of the events where Delia’s chronic perfectionism began to show itself, having a deleterious effect on her ability to finish work, despite her being a professional who had tackled numerous large projects. She was working on the piece up to the last minute the night before the event, making edits, trying to make it live up to the rigorous standards she set for herself. Brian Hodgson was in charge of directing the program, and he was aware that Delia might have a breakdown and do something to the tape, so he called upon one of the Workshop’s engineers to secretly make a second copy of the final version of the work and give it to him.

Hodgson’s intuition and assessment of the matter was quite correct. He said of the incident, “I said to Richard [the engineer] ‘Run another set in Room 12, don't tell Delia you're doing it, and that copy bring to me in the morning, because I have an awful feeling she was going to destroy the tape.’ And he did that. And she came in the next morning in tears, around 11 o'clock. And said, ‘I've destroyed the tape, what are we going to do?’ I don't think she ever forgave me for that.”

Two years later she would leave the BBC, fed up. In an interview on Radio Scotland she said, “Something serious happened around '72, '73, '74: the world went out of tune with itself and the BBC went out of tune with itself... I think, probably, when they had an accountant as director general. I didn't like the music business.”

She spent a brief time working at Brian Hodgson’s Electrophon Studio, before quitting that too. It was hard for her to quit radio though, as it is for many who’ve been hooked and tried to give it up. She got a gig working as a radio operator. She says of the time,  “Crazy, crazy, crazy! I was the best radio operator Laing Pipelines ever had! I answered a job in the paper for a French speaking radio operator. I just had to sleep - everything was out of tune, so I went to the north of Cumbria. It was twelve miles south of the border. I had a lovely house built from stones from Hadrian's Wall. I was in charge of three transmitters in a disused quarry. I did not want to get involved in a big organisation again. I'd fled the BBC and I thought - oh, Laing's... a local family firm! Then I found this huge consortium between Laing's and these two French companies.”

By 1975 she’d stopped producing music for public consumption. According to Clive Blackburn, “in private, she never stopped writing music either. She simply refused to compromise her integrity in any way. And ultimately, she couldn't cope. She just burnt herself out. An obsessive need for perfection destroyed her."

Yet in the 1990s she started to see the electronic music she had championed come into its own. Pete Kember, a member of the psychedelic noise rock band Spacemen 3, sought Delia out and befriended her. Kember had amassed a collection of synthesizers and electronic music gear as part of his musical research and interests. He was embarking on a new project called Spectrum, making the kind of music she had been at the forefront of in previous decades.

Delia’s life had become chaotic though. The ravages of alcohol abuse were catching up with her body. Just as she started to work on public music again with Kember, in 2001, she died of renal failure. A short 55-second collaboration they had made, called Synchrodipidity Machine (Taken from an Unfinished Dream), was released after she had departed and was dedicated to her memory. Kember credited her with "liquid paper sounds generated using fourier synthesis of sound based on photo/pixel info (B2wav - bitmap to sound programme)."

After she died 267 reel-to-reel tapes and a box of a thousand papers were found in her attic. These were entrusted to Mark Ayres of the BBC and in 2007 were given on permanent loan to the University of Manchester. Almost all the tapes were digitised in 2007 by Louis Niebur and David Butler, but none of the music has been published due to copyright complications.

​Her life was an unfinished dream, and it is a shame she did not stick around long enough to see the credit that was later bestowed on her for her generous contributions to electronic music. 
Sources:
The BBC Radiophonic Workshop: The First 25 Years by Desmond Briscoe, BBC 1983

Special Sound: The Creation and Legacy of the BBC Radiophonic Workshop by Louis Niebur, Oxford, 2010
 
https://www.bbc.com/historyofthebbc/100-voices/pioneering-women/women-of-the-workshop/delia-derbyshire
http://www.delia-derbyshire.org/
https://www.bbc.co.uk/programmes/profiles/51LC2shThjnCNR8dd4z2SRQ/delia-derbyshire
https://wikidelia.net/
https://wikidelia.net/wiki/Morse_code_musician

Sound Archive:
http://www.ubu.com/sound/derbyshire.html

If you liked this article check out the rest in the Radiophonic Laboratory series.
0 Comments

The Sound-Houses of Daphne Oram

4/27/2020

0 Comments

 
Picture
As co-founder of the BBC Radiophonic Workshop, the unit created in 1958 that produced sound effects, incidental sounds and music for radio and television, Daphne Oram held a key place in the history of electronic music. Alongside F.C. Judd she was one of the first proponents of musique concrète in the UK. Her development of the Oramics system, a drawn-sound technique that involves inscribing waveforms and shapes directly onto 35mm film stock, also made her an innovative, if arcane, inventor of new musical technology. Daphne also gets the credit for being the first woman to design and construct an electronic musical instrument, and the first to set up an independent personal electronic music studio.

Oram was born to James and Ida Oram on 31 December 1925 in Wiltshire, England. She was taught music at an early age, starting with piano and organ before moving on to composition. Her father was a coal merchant’s manager, but he was also an amateur archaeologist, and during the 1950s was president of the Wiltshire Archaeological Society. Her childhood home was within 10 miles of the stone circle of Avebury and 20 miles from Stonehenge. Her mother was an amateur artist. It seems that her parents’ interests in history and the arts lent themselves to Daphne’s blossoming in the fields of music and technology.

At the age of seventeen the young Daphne was offered a place at the Royal College of Music but chose instead to take a Junior Studio Engineer position at the BBC. Part of her work was behind the scenes during live concerts at the Albert Hall, ‘shadowing’ the musicians and standing ready to play a pre-recorded version of the music for broadcast in the event the transmission was disrupted by German attack, not an unlikely fear just a year after the Blitz.
  
Graham Wrench was just a lad at the time but got to know Daphne through his father, who was a musician in the London Symphony Orchestra. Many years later he worked with Daphne as an engineer on her Oramics system. He said of her work for the BBC at the time, "Daphne's job involved more than just setting the levels. She had a stack of records, and the printed scores of whatever pieces the orchestra was due to play. If anything went wrong in the auditorium she was expected to switch over seamlessly from the live orchestra to exactly the right part of the record!”

Her other duties included the creation of sound effects for radio shows as well as keeping the broadcast levels of sound balanced and mixed. It was during this time period that she started to become aware of new developments in synthesized sound and started to make her own experiments with tape recorders late into the night, staying to work in the BBC studios long after her co-workers and colleagues had popped off to the pub or gone home for the evening. Cutting, splicing, playing backwards, looping, speeding up and slowing down, were all tape techniques she learned and became expert at.
​   
In the 1940s she also composed an orchestral work that is now considered by some to be the first electro-acoustic composition. The piece was titled Still Point and involved the use of turntables, a double orchestra, and five microphones. The BBC rejected the piece from their programming schedule and it remained unheard for seventy years. It was resurrected by Shiva Feshareki, who performed it with the London Contemporary Orchestra for the first time on June 24, 2016. A revised version was later performed by Feshareki and the LCO alongside James Bulley, following Oram’s composition notes.
Picture
We Also Have Sound-Houses
Despite the rejection of her innovative score, the BBC promoted her to music studio manager in the 1950s. It was around this time she travelled to the RTF studios in Paris, where Pierre Schaeffer had been hard at work developing musique concrète. Daphne began a crusade for a studio at the BBC dedicated to producing electronic music and musique concrète for use in radio and television programs. She demonstrated her vision of what this music could be when she was commissioned to compose music for the play Amphitryon 38 in 1957, producing the BBC’s first entirely electronic score. It was made using a sine wave oscillator, self-designed filters, and a tape recorder.
The production and piece were a success and these led to further commissions for electronic music. Fellow work colleague and electronic musician Desmond Briscoe also started to receive commissions for a number of other productions. One of the most significant was a request for electronic music to accompany Samuel Beckett’s All that Fall, which also was produced in 1957. The demand for electronic music was there, and the BBC finally gave in, giving Oram and Briscoe the go-ahead, and the budget, to establish the BBC Radiophonic Workshop.
The focus of the Workshop was to provide sound effects and theme music for all of the corporation's output, including the science fiction serial Quatermass and the Pit (1958–59) and "Major Bloodnok's Stomach" for the radio comedy series The Goon Show.
One of Daphne’s guiding stars at the Workshop came from a passage in The New Atlantis, the unfinished utopian and proto-science fiction novel penned by Sir Francis Bacon. The novel depicts the crew of a European ship lost at sea somewhere in the Pacific west of Peru. Eventually they reach a mythical island called Bensalem. There isn’t much plot in the book, but the setup allowed Bacon to reveal his vision of an age of religious tolerance, scientific inquiry, and technological progress. In The New Atlantis, Solomon’s House is a state-sponsored scientific institution that teases out the secrets of nature and investigates all phenomena, including music and acoustics. The book went on to form the basis for the establishment of the Royal Society. Daphne found one passage in it to be both prophetic and something of a mission statement. She posted the following passage from the book on the door of the Radiophonic Workshop:
“We have also sound-houses, where we practice and demonstrate all sounds and their generation. We have harmonies, which you have not, of quarter-sounds and lesser slides of sounds. Divers instruments of music likewise to you unknown, some sweeter than any you have, together with bells and rings that are dainty and sweet. We represent small sounds as great and deep, likewise great sounds extenuate and sharp; we make divers tremblings and warblings of sounds, which in their original are entire. We represent and imitate all articulate sounds and letters, and the voices and notes of beasts and birds. We have certain helps which set to the ear do further the hearing greatly. We also have divers strange and artificial echoes, reflecting the voice many times, and as it were tossing it, and some that give back the voice louder than it came, some shriller and some deeper; yea, some rendering the voice differing in the letters or articulate sound from that they receive. We have also means to convey sounds in trunks and pipes, in strange lines and distances.”
Yet even before a year was out, her ambition for the sound-house she had worked so hard to establish came to loggerheads with the station executives. The inciting incident seems to have been her attendance at the Brussels World’s Fair and the Journées Internationales de Musique Expérimentale exhibition she was sent to attend. It was there that she heard Edgard Varèse’s demonstration of his groundbreaking Poème électronique. And she heard other electronic music that was pushing the boundaries of the possible further still.
This exalting experience created a deep dissatisfaction in her when she returned to work and the music department refused to put electronic music at the forefront of their activities and agenda. The realm of the possible had smacked up against the wall of the permissible. So Daphne resigned from the workshop with the hope of establishing her own studio.
In the hindsight of an outsider it seems this move may not have been the most strategic. Yet it did give her the freedom to develop her own electronic music instrument, Oramics, ill-fated as it was on a practical level. ​
Picture
Oramics

Immediately after leaving the BBC in 1959, Oram began setting up her Oramics Studios for Electronic Composition in Tower Folly, a former oast house (a building designed for drying hops prior to brewing) near Wrotham, Kent. The technique she created there involved the innovative use of 35mm film stock. Shapes drawn or etched onto the film strips could be read by photo-electric cells and transformed into sounds.

According to Oram, "Every nuance, every subtlety of phrasing, every tone gradation or pitch inflection must be possible just by a change in the written form."

While innovative, the Oramics technique was also expensive, and Daphne met the financial pressure of having her own studio by opening it up and working as a commercial composer. Being director of the studio gave her complete control and freedom to experiment, but it also meant dealing with the stress of making it economically viable. For the first few years she made music for commercial films, sound installations and exhibits, as well as material for television and radio. She made the electronic sounds featured in Jack Clayton’s 1961 psychological horror film The Innocents. She also collaborated with opera singers and created material for concert works.

These pressures eased in 1962 when she was given a grant of £3,550 (equivalent to £76,000 in today’s money). She was able to put more effort into building her drawn sound instrument. 

In 1965 she reconnected with Graham Wrench, a few years after she had bumped into him at the IBC recording studio where she had brought in some tape loops for a commercial. She was in need of an engineer and technician and asked Wrench if he wanted the job, so he drove down with his wife to check things out. 

Graham said of the visit, “on a board covering a billiard table in an adjoining reception room was displayed the electronics for Oramics. There wasn't very much of it! She had an oscilloscope and an oscillator that were both unusable, and a few other bits and pieces — some old GPO relays, I remember. Daphne didn't seem to be very technical, but she explained that she wanted to build a new system for making electronic music: one that allowed the musician to become much more involved in the production of the sound. She knew about optical recording, as used for film projectors, and she wanted to be able to control her system by drawing directly onto strips of film. Daphne admitted the project had been started some years before, but no progress had been made in the last 12 months. I said I knew how to make it work, so she took me on. I left my job with the Medical Research Council and started as soon as I could.”

Graham was able to help her build the system up, drawing on his experience as a radar specialist in the RAF. He started by designing a time-base for the waveform generator. To do this he needed photo-transistors, which were too expensive to buy commercially, so he made his own by scraping the paint off regular transistors, themselves still pricey at the time as they had only been on the market a few years.

The waveform-generator itself worked in the same fashion as an oscilloscope, but in reverse.  It used a “six‑inch CRT [Cathode Ray Tube] mounted inside a lightproof box, with a 5x4‑inch photographic slide carrier fixed to the front of its screen. Mounted some distance in front of the CRT was a photomultiplier tube, arranged so as to detect light from anywhere on the screen. In the slide carrier was placed a transparency with an image of the required waveform; but this was not, as generally believed, simply a line drawing. The shape was filled in with solid black below the line and was left transparent above it, looking rather like the silhouette of a mountain range.

Across the bottom of the CRT screen a dot of light was made to trace a horizontal line by scanning repeatedly from left to right along the 'X' axis. If the beam happened to be obscured by the lower, opaque part of the drawn waveform, no light would be detected by the photomultiplier tube. If so, the beam was told to move higher up the screen until the photomultiplier could see it. In this way the moving dot of light was forced to follow exactly whatever profile was drawn on the transparency. Altering the voltage of the CRT's Y‑axis deflection plates controlled the up and down movement of the dot. The charge on these plates is very high — usually several hundred Volts. But if fluctuations in the Y‑axis voltage were scaled down to within just a Volt or so, it could be connected to an audio amplifier… And that is exactly how the Oramics machine generated its sound: the audio output was tapped off the Y-axis voltage of the CRT.

Whatever shape was placed in front of the screen became just one cycle of a repeating waveform. The speed at which the dot of light travelled across the screen on the 'X' axis was controlled by the time‑base unit, and was adjustable over a very large range so that the speed of the scan dictated the frequency of the sound it produced. If the beam travelled across the screen 440 times every second, it would scan the drawn waveform 440 times, producing a pitch of 440 Hertz, or the 'Concert A' above middle C.”

He had also created an analog digital system by dividing the film into four usable tracks, “each of which can be set to on or off by putting a spot of paint in the appropriate place on the film, to be read by a photo‑cell. Remember how the binary system works? Well, if each strip of film has four tracks, we can use them as four places of binary digits. The track on the lower edge of the film does nought or one; the next one up does nought and two; the next does nought and four; the top‑most track does nought and two again: hence, weighted binary. So it's very simple to 'program' each strip of film with a number — it only has to be between nought and nine — just by painting up to four spots on the film.

"Imagine that you've put a waveform picture in the scanner. If you'd like that sound to play at a frequency of 440 Hertz, then you go first to the strip of film that programs the hundreds of cycles per second. There are four available film‑strips of four tracks each, so just put a spot on the third track of the third film from the bottom (the hundreds). Then go to the film strip that programs the tens of cycles per second, and do the same. That's it — you've programmed 440 Hertz! When the film is run, those two spots of paint will be read by the photo‑cells, which in turn, control latching relays that switch in banks of resistors and make the time‑base run at whatever frequency. So you see, it is digitally controlled — but not how you'd imagine it! I know it seems a strange way to play a tune, but with a bit of practice it becomes quite intuitive.”

He also developed a way to control volume with the system: an optical arrangement in which a light is faded up and down to change the audio level via a photo-resistor. He also figured out how to create tremolo and vibrato. The system had become very flexible in the sonics it was able to produce. Being able to draw a sound gave amazing freedom in creating rich envelopes of music.

Sadly Graham, who had done so much to develop the system, was let go by Daphne following an illness of hers that some believed had been a brain hemorrhage, but which was never fully diagnosed. Graham believed it was a nervous breakdown caused by her long working hours and perhaps by the 5Hz subharmonic frequencies produced by the Oramics machine, which he later fixed by adding a high-pass filter to remove the subsonics. The reason for his release was never made clear. It was a real shame, because Graham had done a lot of work to get the system in place as she had envisioned it.

Other engineers and technicians came in and copied what he had done to expand the Oramics system, while Daphne continued to compose, research, and think about the implications of electronic music from a philosophical perspective. She turned her attention to the subtle nuances of sound that composers using traditional instruments had never been able to control before. She applied this research to the study of perception itself, and how the human ear influences the way the brain apprehends the world. Oramics came to encompass a study of vibrational phenomena, and she divided her system into two distinct parts: the commercial and the mystical.

In her detailed notebooks Daphne defined Oramics as "the study of sound and its relationship to life."

Throughout her career, Oram lectured on electronic music and studio techniques. In the early seventies she was commissioned to write a book on electronic music. She didn’t want it to become a how-to book, so she instead took a philosophical and meditative approach to the subject. An Individual Note of Music, Sound and Electronics was published in 1972 and reissued in 2016.

Later in the 1970s Oram began a second book, which never saw print but survives as a manuscript titled The Sound of the Past - A Resonating Speculation. In this work the influence of her father’s interest in archaeology can be seen. In it she speculates and muses on the subject of archaeological acoustics and proposes a theory, backed by research, suggesting that Neolithic chambered mounds and ancient sites like Stonehenge and the Great Pyramid in Egypt were used as resonators and could amplify sound. Her research suggested that ancient peoples, through their knowledge of sound and acoustics, may have been able to use these places for long-distance communication.

By the time the 1980s rolled around she was engaged to work on the development of a software version of Oramics for the Acorn Archimedes computer, receiving a grant from the Ralph Vaughan Williams Trust. She had wished to continue the mystical side of her sound research, but the continuing financial struggles of such a project left that dream mostly unfulfilled.

In the 1990s Oram suffered two strokes that eventually led her away from her work and into a nursing home. She died in 2003.

In her book Daphne wrote, "We will be entering a strange world where composers will be mingling with capacitors, computers will be controlling crotchets and, maybe, memory, music and magnetism will lead us towards metaphysics."

It is true we are living in that strange world, where computers control an Internet of Things and smart fabrics are woven by machines. It remains to be seen whether the philosophers and spiritually minded musicians of today will marry their love of all things electrical and electromagnetic with the long memory necessary for us to understand the fundamental nature of reality.


Sources:
An archive of her recordings can be listened to for free here: http://www.ubu.com/sound/oram.html
A contemporary reinterpretation of her music from the BBC archives can be found here: https://ecstaticrecordings.bandcamp.com/album/sound-houses
http://www.ubu.com/historical/oram/index.html
https://publicdomainreview.org/essay/cat-pianos-sound-houses-and-other-imaginary-musical-instruments
https://www.theguardian.com/music/2008/aug/01/daphne.oram.remembered
https://www.soundonsound.com/people/graham-wrench-story-daphne-orams-optical-synthesizer
https://frieze.com/article/music-15
https://en.wikipedia.org/wiki/Daphne_Oram
0 Comments

Musician of Sounds: Noise, Pierre Schaeffer, and Musique Concrète

4/9/2020

0 Comments

 
Picture
IS THERE ANY ESCAPE FROM NOISE?

In our machine-dominated age there is hardly any escape from noise. Even in the most remote wilderness outpost, planes fly overhead to disrupt the sound of the wind in the trees and the birds in the air. In the city noise is so much part of the background that we have to tune in to it in order to notice it, because we’ve become adept at tuning it out. Roaring motors, the incessant hum of the computer fan, the refrigerator coolant, metal grinding at the light industrial factory down the street, the roar of traffic on I-75, the beep of a truck backing up: these and many other noises are all part of our daily soundscape.
​
Throughout human history musicians have sought to mimic the sounds around them. The gentle drone of the tanpura, a stringed instrument that accompanies sitar, flute, voice and other instruments in classical Indian music, was said to mimic the gentle murmur of rivers and streams. Should it be a surprise, then, that in the nineteenth and twentieth centuries musicians and composers started to mimic the sounds of the machines around them? In bluegrass and jazz there is a whole slew of songs that copied the entrancing rhythms of the train. As more and more machines filled up the cities, is it any wonder that the beginnings of a new genre of music - noise music - started to emerge? Is it any wonder that, as acoustic and sound technology progressed, our music-making practices also came to be dominated by machines? 
THE ART OF NOISES

And just what is music anyway? There are many definitions from across the span of time and human culture. Each definition has been made to fit the type, style and particular practice or praxis of music.  

In his 1913 manifesto The Art of Noises the Italian Futurist thinker Luigi Russolo argued that the human ear had become accustomed to the speed, energy, and noise of the urban industrial soundscape. In reaction to those new conditions he thought there should be a new approach to composition and musical instrumentation. He traced the history of Western music back to Greek musical theory, which was based on the mathematical tetrachord of Pythagoras and did not allow for harmony. This changed during the Middle Ages, first with the plainchant sung in Christian monastic communities. Plainchant employs the modal system, which is used to work out the relative pitches of each line on the staff, and its notation was the first revival of written music after knowledge of the ancient Greek system was lost. In the late 9th century plainsong began to evolve into organum, which led to the development of polyphony. Until then the chord, as such, did not exist.

Russolo thought that the chord was the "complete sound." He noted that chords developed slowly over time, moving from the "consonant triad to the consistent and complicated dissonances that characterize contemporary music." He pointed out that early music tried to create sounds that were sweet and pure, and that it then evolved to become more and more complex. By the time of Schoenberg and the twelve-tone revolution of serial music, musicians sought to create new and more dissonant chords. These dissonant chords brought music ever closer to his idea of "noise-sound."

With the relative quiet of nature and pre-industrial cities disturbed, Russolo thought a new sonic palette was required. He proposed that electronics and other technology would allow futurist musicians to go beyond the limited variety of timbres available in the traditional orchestra. His view was that we must "break out of this limited circle of sound and conquer the infinite variety of noise-sounds." This would be done with new technology that would allow us to manipulate noises in ways that never could have been done with earlier instruments. In that, he was quite correct.

Russolo wasn’t the only one thinking about the aesthetics of noise, or seeking new definitions of music. The French Modernist composer Edgard Varèse said that “music is organized sound.” It was a statement he used as a guidepost for his aesthetic vision of "sound as living matter" and of "musical space as open rather than bounded". Varèse thought that "to stubbornly conditioned ears, anything new in music has always been called noise", and he posed the question, "what is music but organized noises?" An open view of music allows new elements to enter the development of musical traditions, whereas a bounded view tries to keep out those things that do not fit the preexisting definition.
​
Out of this current of noise music, initiated in part by Russolo and Varèse, a new class of musician would emerge: the musician of sounds.
MUSICIAN OF SOUNDS
​
Fellow Frenchman Pierre Schaeffer developed his theory and practice of musique concrète during the 1930s and ‘40s, and saw it spread in the ‘50s to people such as Karlheinz Stockhausen, the founders of the BBC Radiophonic Workshop, F.C. Judd and many others. Musique concrète was a practical application of Russolo’s idea of “noise-sound” and an exploration of the expanded timbres made possible by then-new studio techniques. It was also a way of making music according to the “organized sound” definition, and it was distinct from previous methods in being the first type of music completely dependent on recording and broadcast studios.
In musique concrète sounds are sampled and modified through the application of audio effects and tape manipulation techniques, then reassembled into a form of montage or collage. It can feature any sounds: recordings of musical instruments, the human voice, field recordings of the natural and man-made environment, or sounds created in the studio. Schaeffer was an experimental audio researcher who combined his work in the field of radio communications with a love for electro-acoustics. Because he was the first to use and develop these studio music-making methods he is considered a pioneer of electronic music, and one of the most influential musicians of the 20th century. The recording and sampling techniques he pioneered are now part of the standard operating procedures used by nearly all record production companies around the world. Schaeffer’s efforts and influence in this area earned him the title “Musician of Sounds.”

Schaeffer, born in 1910, had a wide variety of interests throughout his eighty-five years on this planet. He worked variously as a composer, writer, broadcaster, engineer, musicologist and acoustician. His work was innovative in both science and art. It was after World War II that he developed musique concrète, all while continuing to write essays, short novels, biographies and pieces for the radio. Much of his writing was geared towards the philosophy and theory of music, which he then demonstrated in his compositions.

It is interesting to think of the influences on him as a person. Both his parents were musicians, his father a violinist and his mother a singer, but they discouraged him from pursuing a career in music and instead pushed him into engineering. He studied at the École Polytechnique, where he received a diploma in radio broadcasting. He brought the perspective and approach of an engineer, along with his inborn musicality, to bear on his various activities.

Schaeffer got his first telecommunications gig in 1934 in Strasbourg. The next year he got married, and the couple had their first child before moving to Paris, where he began work at Radiodiffusion Française (later Radiodiffusion-Télévision Française, RTF). As he worked in broadcasting he started to drift away from his initial interest in telecommunications and towards music. When these two sides met he really began to excel.
​
After convincing the management at the radio station of the alternate possibilities inherent in the audio and broadcast equipment, as well as the possibility of using records and phonographs as a means for making new music, he started to experiment. He would record sounds to phonograph discs and speed them up, slow them down, play them backwards, run them through other audio processing devices, and mix sounds together. While all this is just par for the course in today’s studios, it was the bleeding edge of innovation at the time.  
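For readers who want a feel for what those phonograph tricks amount to in modern terms, here is a minimal sketch, not Schaeffer's method but a present-day analogue, showing how reversing, varispeed playback, and mixing can be emulated on a digitized recording with Python and numpy. The file names and the playback ratio are hypothetical.

```python
# A rough digital analogue of Schaeffer's phonograph manipulations:
# reverse a recording, change its playback speed, and mix two sounds.
# Assumes a mono WAV file named "bell.wav" exists; the name is hypothetical.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("bell.wav")
audio = audio.astype(np.float64)

# "Play it backwards": reverse the sample order.
reversed_audio = audio[::-1]

# "Speed it up / slow it down": resample by a playback ratio.
# ratio > 1.0 plays faster (and higher), like spinning the disc faster.
ratio = 1.5
indices = np.arange(0, len(audio), ratio)
varispeed = np.interp(indices, np.arange(len(audio)), audio)

# "Mix sounds together": sum two signals of equal length and rescale.
length = min(len(reversed_audio), len(varispeed))
mix = reversed_audio[:length] * 0.5 + varispeed[:length] * 0.5

wavfile.write("etude.wav", rate, mix.astype(np.int16))
```

What took racks of turntables and filters in 1948 is now a dozen lines of array manipulation, which is one way of measuring how completely Schaeffer's techniques were absorbed into ordinary studio practice.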
With these techniques mastered he started to work with people he met via the RTF. All this experimentation had as a natural outgrowth a style that lent itself to the avant-garde of the day. The sounds he produced challenged the way music had been thought of and heard. With the use of his own and his colleagues’ engineering acumen, new electronic instruments were made to expand on the initial processes in the audio lab, which eventually became formalized as the Club d’Essai, or Test Club. 
CLUB D’ESSAI

In 1942 Schaeffer founded the Studio d'Essai, later dubbed the Club d'Essai, at RTF. It started as an outgrowth of his radiophonic explorations, was active in the French Resistance during World War II, and later became a center of musical activity. It was responsible for the first broadcasts in liberated Paris in August 1944. He was joined in the leadership of the Club by Jacques Copeau, the theatre director, producer, actor, and dramatist. 
It was at the Club that many of Schaeffer’s ideas were put to the test. After the war Schaeffer wrote a paper discussing how sound recording creates transformations in the perception of time, thanks to the ability to slow down and speed up sounds. The essay showed his grasp of sound manipulation techniques, which were also demonstrated in his compositions.

In 1948 Schaeffer initiated formal “research into noises” at the Club d'Essai, and on October 5th of that year he presented the results of his experimentation at a concert given in Paris. Five works for phonograph (known collectively as Cinq études de bruits—Five Studies of Noises), including Étude violette (Study in Purple) and Étude aux chemins de fer (Study of the Railroads), were presented. This was the first flowering of the musique concrète style, and from the Club d’Essai another research group was born.
GRMC: Groupe de Recherche de Musique Concrète

In 1949 another key figure in the development of musique concrète stepped onto the stage. By the time Pierre Henry met Pierre Schaeffer via the Club d’Essai, the twenty-one-year-old percussionist-composer had already been experimenting with sounds produced by various objects for six years. He was obsessed with the idea of integrating noise into music, and had studied with the likes of Olivier Messiaen, Nadia Boulanger, and Félix Passerone at the Paris Conservatoire from 1938 to 1948.

For the next nine years he worked at the Club d'Essai studio at RTF. In 1950 he collaborated with Schaeffer on the piece Symphonie pour un homme seul. Two years later he scored the first musique concrète to appear in a commercial film, Astrologie ou le miroir de la vie. Henry remained a very active composer and scored for a number of other films and ballets.

Together the two Pierres were quite a pair, and they founded the Groupe de Recherche de Musique Concrète (GRMC) in 1951. This gave Schaeffer a new studio, which included a tape recorder. This was a significant development for him, as he had previously worked only with phonographs and turntables to produce music. Tape sped up the work process, and it also added a new dimension: the ability to cut up and splice tape into new arrangements, something not possible on a phonograph. Schaeffer is generally acknowledged as the first composer to make music using magnetic tape.
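To make the idea of splicing concrete, here is a small sketch of my own, not anything from Schaeffer's studio, that treats a digitized recording as a strip of tape, cuts it into equal lengths, and splices the pieces back together in a new order. The file name and the half-second segment length are hypothetical.

```python
# Emulate cutting a tape into equal lengths and splicing the pieces
# back together in a shuffled order. Assumes a mono WAV named "voice.wav";
# both the file name and the segment length are hypothetical.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("voice.wav")

segment = rate // 2                      # half-second "lengths of tape"
count = len(audio) // segment
pieces = [audio[i * segment:(i + 1) * segment] for i in range(count)]

rng = np.random.default_rng(seed=1948)   # a nod to the year of the noise etudes
order = rng.permutation(count)           # the new splicing order
collage = np.concatenate([pieces[i] for i in order])

wavfile.write("collage.wav", rate, collage)
```

The razor blade and splicing block have become an index array, but the compositional question is the same one Schaeffer faced: in what order do the fragments become music?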

Eventually Schaeffer had enough experimentation and material under his belt to publish À la Recherche d'une Musique Concrète ("In Search of a Concrete Music") in 1952, which was a summation of his working methods up to that point.

Schaeffer remained active in other aspects of music and radio throughout the ‘50s. In 1954 he co-founded Ocora, a music label and a facility for training broadcast technicians. Ocora stood for the “Office de Coopération Radiophonique”. The purpose of the label was to preserve, via recordings, rural soundscapes in Africa. This also put Schaeffer at the forefront of field recording and of the preservation of traditional music. The training side of the operation prepared people to work with the African national broadcasting services. 

His last electronic noise etude, the Étude aux objets (Study of Objects), was realized in 1959.
​
For Pierre Henry’s part, two years after leaving the RTF he founded, with Jean Baronnet, the first private electronic studio in France, the Apsone-Cabasse Studio. Later Henry paid tribute to Schaeffer with his Écho d'Orphée.
A CONCRETE LEGACY

The legacy of musique concrète remains concrete. Schaeffer had known of the “noise orchestras” of his predecessor Luigi Russolo, but he took the concept of noise music and developed it further, making it clear that any and all sounds have a part to play in the vocabulary of music. He created the toolkit later experimenters took as a starting point. He was the original sampler. In all his work he emphasized the role of play, or jeu, in making music. His idea of jeu came from the French verb jouer, which shares the same dual meaning as the English word play: to make pleasing sounds or songs on a musical instrument, and to engage with things as a way of enjoyment and recreation. Taking sounds and manipulating them, seeing what certain processes will do to them, is at the heart of discovery and play inside the radiophonic laboratory. The ability to play opens up the mind to new possibilities.   
***

This article originally appeared in the April 2020 edition of the Q-Fiver.

If you enjoyed this article please consider reading the rest of the Radiophonic Laboratory series. 


Walking the Straight Edge

2/26/2020

On the bus ride home from work the other day I overheard an interesting conversation. Two guys were talking about their experiences in and out of prison, with the courts, with probation, with the criminal justice system in general. The two fellows talked about how the elevators at the justice center were broken for days on end, and how, because the elevators were down, visitors weren’t allowed in. Not being able to see friends and family made their stay all the more miserable. As I sat there listening in I thought it sounded right on target, par for the course with societal collapse. As local governments lose funding for the repair of public buildings, it makes sense that our jails might not be first on the list to get fixed.

One comment really stuck with me though. When the guy said he knew four dudes who OD’d on fentanyl while he was in the slammer, I wasn’t surprised, but I was shocked.

People on the street are dying from this stuff. Now it seems so are the people who get picked up off the street by the police and thrown into jail for possession. Now they can OD from the convenience of their jail cell. I guess those cavity searches aren’t going so well.

Being on the wrong side of the law hasn’t really been part of my experience. Unless you count the one trip I made to juvie for stealing cough syrup, or the time I got a slap on the wrist by a judge for some graffiti I got caught carving onto a picnic table at a park. Then there was the time I got a misdemeanor at age twenty-four, when I contributed to the delinquency of a minor by buying my disabled, then nineteen-year-old cousin some booze. I hadn’t expected him to actually chug the rum. I panicked when he started falling out of his wheelchair in a quick drunken stupor. I couldn’t handle the situation and had to call 911 for assistance. I did the wrong thing, then I did the right thing, and I got a hefty fine. My cousin and I are still real close, and he doesn’t blame me for the incident. I do accept the responsibility for the part I played. 

So unless you tally the times I’ve gotten caught breaking the law, I’ve been a law abiding citizen.

My own history with alcohol and drugs is rather checkered, as you might be able to guess from the incidents above. There were other ‘incidents’, if my addled memory serves me right. One thing I’m grateful for is that I never graduated to shooting up. Several of my close friends and some other cousins did when we were all at college together in the years around the turn of the millennium. Some of them are still in the throes of those addictions now, and one is homeless, living on the streets of San Francisco. I remember being offered heroin with the caveat “We’ll shoot you up. We know what we are doing.” When I review that memory it’s one of the times I’m happy to suffer from anxiety, because in that moment my neurotic fears protected me from something so much worse.

But just because I didn’t shoot up doesn’t mean I didn’t do a bunch of other stupid shit, and waste a lot of time from age fourteen until I finally gave up alcohol and marijuana at age thirty-six. By that point they’d stopped working, and had been interfering in my life long enough. I didn’t hit rock bottom per se, but I hit a bottom, and was only compelled to quit when faced with a barrage of pain. It was one of the best choices I’ve ever made.

It’s kind of ironic that I took the path into drugs in the first place. When I was first getting into punk music I was in adamant opposition to all that. I blasted the hardcore sounds of the band Minor Threat on my Walkman, and I was influenced by their lyrics and by the mentorship of an older vegetarian Straight Edge punk who lived down the street. He turned me on to so much good music via his mixtapes. Around that time I claimed to be Straight Edge too.

Straight Edge is a philosophy that emerged from within the punk rock, hardcore and skateboarding subcultures whose adherents refrain from using alcohol, tobacco, and other recreational/non-prescribed drugs (marijuana, MDMA, LSD, cocaine, heroin, etc.). It has since broadened out of those specific spheres.

Peer pressure is a real thing though, whether subtle or overt, and soon I abandoned the philosophy and embarked on a program of what I thought was the expansion of consciousness through the systematic derangement of all the senses. Through all the years of drinking that followed, the idealism inherent within the Straight Edge philosophy was there in the back of my mind, a conscience that had been put on mute. All these years later, as I return to the philosophy, I find it still has much to offer a Western society plagued with rampant drug and alcohol abuse.    
The term Straight Edge itself came from a song of the same name by Minor Threat. The lyrics, full of the self-righteous vehemence of youth, remain just as powerful today as when they first wrote it in Washington D.C. in 1981.

“I'm a person just like you / but I've got better things to do / than sit around and fuck my head / hang out with the living dead / snort white shit up my nose / pass out at the shows / I don't even think about speed / that's just something I don't need / I've got the Straight Edge! / I'm a person just like you / but I've got better things to do / than sit around and smoke dope / 'cause I know that I can cope / laugh at the thought at eating ludes / laugh at the thought of sniffing glue / always gonna keep in touch / never gonna use a crutch / I've got the Straight Edge!”

The song launched a revolution. It was a reaction to the hedonism so often found within the punk scene. The Ramones had sung the polar opposite in their song “Now I Wanna Sniff Some Glue”: “Now I wanna sniff some glue / Now I wanna have somethin' to do / All the kids wanna sniff some glue / All the kids want somethin' to do.”

Of the many things punk rebelled against, boredom might be at the top of the list. One way to combat boredom is to take drugs to excess. This seemed to be especially true of those who had embraced the nihilism that also permeated the subculture. But not all punks thought seeking oblivion through the obliteration of consciousness was the best strategy for coping with their existential vexations. Some thought not taking drugs was the real rebellion. Some thought that not getting drunk and blitzed out of your mind was a more productive option. They did have something better to do than watch TV and have a couple of brews.

In the song Bottled Violence, Minor Threat took aim at violent drunks. “Get your bravery from a six pack / Get your bravery from a half-pint / Drink your whiskey, drink your grain / Bottoms up, and you don't feel pain / Drink your whiskey, drink your grain / Bottoms up, and you don't feel pain / Go out and fight, fight / Go out and fight, fight / Go out and fight, fight / Go out and fight, fight / Bottled violence / Lose control of your body / Beat the shit out of somebody / Half-shut eyes don't see who you hit / But you don't take any shit / Half-shut eyes don't see who you hit / But you don't take any shit.”

A Straight Edger preferred to develop their inner bravery. It came from resisting the allure of mindlessness that accompanied drinking and drugging. It allowed them to pursue other forms of meaning when they could have just accepted the status quo.

Though the core of the Straight Edge philosophy is to refrain from smoking, taking drugs, and drinking alcohol, some took it further. They also included abstaining from casual sex, or from eating meat, as part of their lifestyle. Some even nixed caffeine, over-the-counter drugs, and prescription drugs. For various people there were various gradations. For most of the people in the scene it wasn’t about telling other people what to do as much as it was about taking control of your own life. It remains a relevant strategy.

Control yourself, control your mind, and other people have a harder time controlling you.
Straight Edge is sometimes abbreviated as sXe, with the X used as a symbol for the lifestyle.  
Journalist Michael Azerrad traced the use of the X symbol back to the band the Teen Idles. The D.C. group embarked on a brief West Coast tour in 1980. One of the gigs they were to play was at San Francisco's Mabuhay Gardens, an important stop for touring bands, and a venue where Frisco locals the Dead Kennedys often played. When the band showed up, club management was alarmed to discover that they were actually still teens, or at least under the legal drinking age, and technically weren’t supposed to even be in the club.

The management compromised, not wanting to lose out on whatever bit of money the young punks could help rake in, and besides they were already booked. As a way of showing the staff not to serve them any booze they marked each of the band members’ hands with a large black X. When the band came back home to D.C., they suggested the system to other local clubs and venues as a way to get teenagers in to see the bands without being served alcohol. This in turn sparked another movement within the punk scene where some bands, many of them hardcore or straight edge, would only play at “all ages” venues.
​
Later that year The Teen Idles released their Minor Disturbance album. On the cover were two hands with black Xs on the back. This album sealed the deal and the mark soon became associated with the Straight Edge lifestyle. The practice of marking the hands of underage kids with an X at clubs and music venues continued to spread around the country.
One of the members of the Teen Idles happened to be a guy named Ian MacKaye; another was Jeff Nelson. They went on to form Minor Threat, and from there the Straight Edge subculture continued to grow and evolve.

It is for all these reasons that the Straight Edge movement always gets traced back to Ian MacKaye, even if he is hardly the first person to have been an abstainer. The sentiment had been bubbling up in the scene but he gave it a name, and the symbol of the X that was also adopted. In an interview for the documentary Another State of Mind MacKaye said “When I became a punk, my main fight was against the people who were around me — friends".

When he was 13 he had moved from D.C. to Palo Alto, California for nine months. When he came back home his friends had started drinking and drugging. He remarked, "I said, 'God, I don't want to be like these people, man. I don't fit in at all with them.' So it was an alternative." MacKaye also noted that the symbol "wasn't supposed to signify straight edge—it was supposed to signify kids. It was about being young punk rockers... it represents youth". In later years MacKaye has often spoken about how he never intended for Straight Edge to even be a movement, but the symbol X and the name stuck. People were inspired and it took on a life of its own. Perhaps through the clarity of Straight Edge and clean living people can retain, or regain, some of the vibrancy of youth into adulthood.

The philosophy can be seen as a direct and practical response to the excess in the culture of the late 1970s and early ‘80s when it arose: cocaine, sleeping around, big spending. Sex wasn’t just getting your jollies off, but a connection to another person. Living without the filters, numbness and distortion imposed over the nervous system by drugs was a way to better connect with reality, and if you didn’t like the reality you found yourself in, you then had energy to go do something about it, whether that was starting a band, making a ‘zine, creating a venue, or some form of direct action. Being Straight Edge was a path to meaningful activities for those who embraced the practice.

Looking at the 2020s ahead of us, and all the decades of industrial-strength drug abuse behind us, we still have the same problems. Only now it may be fentanyl in the headlines, instead of ludes, coke, crack or ecstasy. For a culture in systemic decline, drug abuse is just one of the symptoms, and a temporary escape or refuge for those who would numb themselves against what is often a harsh reality. I can’t judge what another person chooses to do with their body. I know many people who are just social drinkers or weed smokers and I have no problem with it; I just don’t happen to be one myself. I’m also in favor of decriminalization and legalization. Prohibition causes more problems than it ever cured.
Yet I think there is a place within Green Wizardry for Straight Edge. The Green Wizard who is clean, or has gotten clean, will be better able to cope with life on its own terms. They also may be in a better position to help guide those neighbors, friends, or associates who happen to be suffering from an addiction, whether it comes from knowledge of twelve step recovery programs, or some other way of getting and staying sober.

Everyone needs an edge in life after all. As the economy overshoots, sending citizens into free fall, as colleges continue to cater to corporations over true scholarship, as the environment undergoes permutations unknown to our eldest living relatives, it is necessary to sharpen whatever edge we have. If we wish to live conscious lives of volition, if we wish to have needs and desires met, and seek to bring dreams into reality in a world full of suffering, having our own edge will help us to stay positive. People who aren’t medicated or numbed are in a better position to use their willpower to do their work in the world, regardless of what everyone else is doing.

Straight Edge people have a lot more time on their hands. Free from chasing the next buzz or oblivion, they have the energy to pursue plans that can impact their life and the lives of the people around them. This is very different from the fallout that folks in the midst of substance abuse create in their wake. These activities can provide purpose in the face of chaos and corruption.

At my last work location in the heart of downtown Cincinnati, the number of out-of-work people hanging out stoned, drunk at noon, some OD’ing from time to time in the public bathrooms, shows the degree of despair at play in America. This dispirited depression is egged on by an endless negative news cycle, and by a seeming lack of choices in this land where too many choices is no choice at all. Instead of cultivating an edge to meet discomfort, it has been blunted by blunts, numbed by the latest craft brew or mass-produced malt liquor, and anesthetized by opioids. On the other end of the drug spectrum are the crystal meth stimulants driving the brain into overdrive, chasing a cascade of conspirinoid thoughts that make even the most jaded netizen of the conspiracy theory darkwebs look surprisingly sane.
In this liquid environment of binge eating and binge watching the latest reality reruns or sports spectacle, an alternative exists: the Straight Edge and stoic path of sharpening the senses of mind, body and soul in the face of the commodified decadence being shilled by the managerial class.

In 2013 MacKaye gave a talk at the Library of Congress. Speaking of his youth in the ‘70s he said, “In high school, I loved all my friends, but so many of them were just partying. It was disappointing that that was the only form of rebellion that they could come up with, which was self-destruction.”

Self-construction is the path offered by Straight Edge.

Within the larger punk subculture there was often a lot of open hostility directed towards Straight Edgers. Some of it was just brash reaction against people who came off as self-righteous, holier-than-thou, or even militant. I remember being made fun of when I had adopted it; and as I’ve been sober these past four years, having changed my habits and behavior, I have noticed the way some people treat me differently than before. Going against the grain is a small price to pay for the many gains and transformations that have occurred from straightening my ways.

As Minor Threat sang in the song Out of Step “I don't smoke / I don't drink / I don't fuck / At least I can fucking think / I can't keep up! / I can't keep up! / I can't keep up! / Out of step with the world!”

Writing on the influence and legacy of the scene author Nina Renata Aron says, “ask anyone who came of age in the straight edge hardcore scene what it did for them, and they’re likely to tell you it saved their life. Those who’ve seen loved ones fall victim to addiction and its attendant miseries feel the scene spared them various forms of regret, anguish, or worse. More than that, it gave them something to believe in.”

Straight Edge is an antidote. It is Narcan for the individual soul in an overdosed society.   
REFERENCES:

Our Band Could Be Your Life: Scenes from the American Indie Underground 1981-1991, by Michael Azerrad, 2001

Straight edge: How one 46-second song started a 35-year movement by Nina Renata Aron

Curious how to be Straight Edge? Read this handy guide: How to Be Straight Edge

Read the rest of the Down Home Punk series.

The Crystal Psalms of Alvin Curran

2/23/2020

In 1988, the same year Negativland was pioneering the concept and practice of the Teletour, another maverick experimental music composer produced a radio concert like no other before or since. His name is Alvin Curran and the piece in question was his Crystal Psalms, a concerto for musicians in six European nations, simultaneously performed, mixed and broadcast live in stereo to listeners stretched from Palermo, Italy to Helsinki, Finland via six separate but synchronized radio stations.

The name of the radio concerto came from an event that Curran wanted to commemorate with the solemnity it was due: Kristallnacht, otherwise known as Crystal Night or the Night of Broken Glass. It had happened fifty years before the broadcast, on November 9th and 10th in Germany. This was the date of the November Pogroms, when civilian and Nazi paramilitary forces mobbed the streets to attack Jewish people and their property. The horrendous event was dubbed Kristallnacht because of all the broken glass left on the ground after the windows of Jewish stores, buildings and synagogues were smashed.

On Kristallnacht rioters destroyed 267 synagogues throughout Germany, Austria and the Sudetenland. They ransacked and set fire to homes, hospitals and schools. 30,000 Jewish men were rounded up and sent to concentration camps. This was the prelude to the sick opus of the Third Reich’s genocide, Hitler’s green light, the ramping up of his twisted plans. The Third Reich had moved on from economic, political and social persecution to physical violence and murder. The Holocaust had begun.

The year before the 50th anniversary of Kristallnacht, a number of cultural and arts organizations had begun making plans for a series of worldwide memorial events. Alvin Curran was in on some of these conversations. Curran had long been part of a vanguard group of ex-pat American composers living in Italy. He was also a founding member of the collective acoustic and electronic improvisation group Musica Elettronica Viva, sometimes known as a Million Electron Volts or simply MEV. They formed in Rome in 1966 and are still active today.

Started by three young Americans with Masters degrees in music composition from Yale and Princeton, MEV combined an Ivy-League classical pedigree with a tendency towards musical anarchism. Just as their music often involved chance operations, or the use of random procedures, the members of the group met by chance (or was it Providence?) on the banks of the Tiber River in Rome in 1965. Without scores, without conductors, they went like bold explorers into the primeval past of music, and its future. Curran says of the band, “….Composers all, nurtured in renowned ivy gardens; some mowed lawns. They met in Rome, near the Cloaca Maxima—and without further ado, began like experimental archeologists to reconstruct the origins of human music. They collected shards of every audible sound, they amplified the inaudible ones, they declared that any vibrating object was itself ‘music,’ they used electricity as a new musical space and cultural theory, they ultimately laid the groundwork for a new common practice. Every audible gurgle, sigh, thump, scratch, blast, every contrapuntal scrimmage, every wall of sound, every two-bit drone, life-threatening collision, heave of melodic reflux that pointed to unmediated liberation, wailing utopias, or other disappearing acts—anything in fact that hinted at the potential unity among all things, space, and times—were MEV’s ‘materia prima.’” 
Curran draws from this same ‘materia prima’ as a prolific musician and composer, and by the 1980s he had an established solo career. At the time of this writing that solo career is long and storied. Crystal Psalms is just one of his many innovative works, and just one of a number of pieces he created specifically for radio. To my knowledge it is the most technically complex of them.

Crystal Psalms was unique in its conception and required hard, dedicated work to pull off. Perhaps that is why these kinds of radio events are rare. Of course their rarity could also be due to a lack of imagination on the part of the corporate media that dominate the airwaves. The project brought together over 300 people, including musicians and technicians, in six major European cities. These musicians and technicians, separated into groups at the six locations, could not see or hear what was happening at the other locations. Yet together they performed as a unified ensemble to realize Curran’s score. In commemorating a dark and destructive moment of human history, Curran demonstrated our creative possibilities for international artistic and technological collaboration.

Curran organized the concert in the fall of 1987 at a meeting in Rome. The producers from each of the six radio stations were there. These included Danmarks Radio, Denmark; Hessischer Rundfunk, Germany; ORF, Austria; Radio France; RAI, Italy; and VPRO, Holland. The RAI in Rome was chosen to be the main technical center and HQ, probably because it was the facility closest to the composer. Alvin wrote the music between May and September at his home in Poggidoro, about an hour’s drive outside the city.

The score was written for six complementary ensembles, one at each station in each country. These ensembles consisted of a mixed chorus (16-32 voices), a quartet of strings or winds, a percussionist and an accordionist. Each of the six groups was conducted independently of the others. And even though they were separated by large distances in space, the ensembles played in time together. To accomplish this, a recorded time track was heard by each conductor, keeping them all synchronized.
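Curran's pre-recorded time track worked, in effect, like a shared click track. As a rough modern illustration, my own sketch rather than a reconstruction of the actual 1988 setup, the snippet below generates an identical click-track file that could be copied to each location so that independent conductors stay aligned; the tempo, duration, and file name are hypothetical.

```python
# Generate a simple synchronization click track: one short beep per beat.
# Tempo, length, and file name are hypothetical; in practice every site
# would simply play back an identical copy of the same recording.
import numpy as np
from scipy.io import wavfile

rate = 44100            # samples per second
bpm = 60                # one click per second
minutes = 1             # a one-minute demo; the real sections ran 24 and 29 minutes
beat = int(rate * 60 / bpm)

t = np.arange(int(rate * 0.05)) / rate          # each click lasts 50 ms
click = 0.8 * np.sin(2 * np.pi * 1000 * t)      # a 1 kHz beep

track = np.zeros(beat * bpm * minutes)
for start in range(0, len(track), beat):
    track[start:start + len(click)] = click

wavfile.write("timetrack.wav", rate, (track * 32767).astype(np.int16))
```

However the 1988 track was actually produced, the principle is the same: a fixed, pre-recorded reference for time is the one thing every site shares, so the six ensembles can be mixed live into a single broadcast without ever hearing one another.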
​
Besides the live music, pre-recorded tapes were also used. These tapes were filled with the sounds of Jewish life. Among them were the ancient shofar (a ritual ram's horn that has been a mainstay in Curran’s music) and recordings of Yemenite Jews praying at Jerusalem’s Western Wall (the “Wailing” Wall). Other sounds on the tape included children from a Roman Jewish orphanage and recordings of many famous Eastern European cantors sourced from various sound archives. Curran even included sounds from his family: he recorded his young niece singing her Bat Mitzvah prayers and his father singing in Yiddish at a family get-together. Birds, trains, and ship horns make appearances. But throughout it all is the sound of breaking glass. Meanwhile the live chorus sings fragments from the Renaissance Jewish composers Salomone Rossi of Italy and another named Caceres, from the famous Portuguese synagogue in Amsterdam. Curran also used choral fragments from versions of the Jewish liturgy composed by Lewandowski and Sulzer in the 19th century.
Crystal Psalms is made up of two long contiguous sections, one of 24 minutes and one of 29. In the first there is a ton of percussion created from fallen and thrown objects. Amidst all these heavy sounds Curran used an 18-voice polyphonic structure to weave an increasingly dense texture from the musical fragments carried by each "voice". As these fragments repeat, the weave is brought ever closer together.

In the second part elements from the pre-recorded tape are more apparent. It moves from one moment to the next, one location or place in time, before jumping to something else. Curran says, “Here tonal chords are anchored to nothing, innocent children recite their lessons in the midst of raging international chaos.” Idling cars and Yiddish lullabies are separated by breaking glass, all undergirded by moments on the accordion, organ and fiddles. A familiar melody will quickly disappear when blasted by noise. A solemn choir sings amidst the sound of someone shuffling through the debris. Fog horns drift in and out as telephones go unanswered. The listener with an ear for classical music will recognize bits of Verdi’s “Va Pensiero” turned into a menacing loop. At the end of it all comes the cawing of a murder of crows, who have come to feed off the destruction.

Curran writes of his piece that “There is no guiding text other than the mysterious recurring sounds of the Hebrew alphabet and the recitation of disconnected numbers in German, so the listeners, like the musicians, are left to navigate in a sea of structured disorder with nothing but blind faith and the clothes on their backs -- survivors of raw sonic history.”

The event of the radio broadcast was for Curran a very special moment. The experience of human artistic and technological collaboration that created it existed for him alongside the memory of the inhuman pogrom memorialized on its 50th anniversary. Curran says, “By focusing on this almost incomprehensible moment in our recent history, I do not intend to offer yet another lesson on the Holocaust, but simply wish to make a clear personal musical statement and to solicit a conscious act of remembering -- remembering not only this moment of unparalleled human madness of fifty years ago, but of all crimes against humanity anywhere anytime.  Without remembering there is no learning; without learning no remembering.  And without remembering and learning there is no survival.”

The radio concert was a one-off event, never to be performed live again. However, recordings from each of the stations involved were made, and in 1991 Alvin remixed them into an album. Writing about all of this I’m reminded of something the American folk singer and storyteller Utah Phillips said about memory: “…the long memory is the most radical idea in this country. It is the loss of that long memory which deprives our people of that connective flow of thoughts and events that clarifies our vision, not of where we're going, but where we want to go.”

Let us remember then, the stories in history, personal or global, we would do well not to repeat and those other stories where people work together towards a common good. Just as this day is the product of all our past actions, so tomorrow will be built on what we do today.


Sources:
 https://en.wikipedia.org/wiki/Kristallnacht
http://www.alvincurran.com/writings/CrystalPsalmsnotes.html
https://nationalsawdust.org/event/mev-musica-elettronica-viva/
Crystal Psalms, New Albion records, 1994

This article originally appeared in the March issue of the Q-Fiver, the newsletter of the Oh-Ky-In Amateur Radio Society.

​Read the rest of the Radiophonic Laboratory series.