One of the key researchers and musicians exploring the new frontiers of science and music at Bell Labs was Laurie Spiegel. She was already an accomplished musician when, at the age of twenty-eight, she started working with interactive compositional software on the computers at Bell. The year was 1973.
Laurie brought her restless curiosity and ceaseless inquiry with her to Bell Labs. She was the kind of person who could see the creative potential in the new tools the facility was creating and make something timeless. It was an ability she had prepared herself for through a scholar's devotion to musical practice and study.
She was interested in the stringed instruments, the ones you strum and pluck. She picked up guitar, banjo, and mandolin for starters and learned to play them all by ear in her teens. She excelled in high school, graduating early to get a jump start on a more refined education. Shimer College had an early entrance program and she made the cut. With Shimer as a springboard she got into its study abroad program and left her native Chicago to join the scholars at Oxford University. While pursuing her degree in social sciences she decided she had better teach herself Western music notation; it was essential if she was to start writing down her own compositions. She managed to stay on at Oxford for an additional year after completing her undergraduate degree. In between classes she would commute to London for lessons with composer and guitarist John W. Duarte, who fleshed out her music theory and composition.
She was no slacker.
Her devotion to music continued to flourish when she came back to the States. In New York she worked briefly on documentary films in the field of social science, but the drive to compose music pushed her back onto the path of continuing education. So she headed back to school again, at Juilliard, going for a master's in composition. Hall Overton, Emmanuel Ghent and Vincent Persichetti were some of her teachers between 1969 and 1972. Jacob Druckman was another; she became his assistant and ended up following him to Brooklyn College. While there she also managed to find some time to research early American music under H. Wiley Hitchcock before completing her MA in 1975.
Laurie was no stranger to work, or to making the sacrifices necessary to achieve her aims and full artistic potential. Laurie's thinking is multidimensional, and her art multidisciplinary. Working with moving images was a natural extension of her musicality. She supported herself in the '70s in part through soundtrack composition at Spectra Films, Valkhn Films, and the Experimental TV Lab at WNET (PBS). The TV Lab provided artists with equipment to produce video pieces through an artist-in-residence program. Laurie held that position in 1976 and composed series music for the TV Lab's weekly "VTR—Video and Television Review". She also did the sound effects for director David Loxton's SF film The Lathe of Heaven, based on the novel by Ursula K. Le Guin and produced for PBS by WNET.
Speaking of the Experimental TV Lab she said, "They had video artists doing really amazing stuff with abstract video and image processing. It was totally different from conventional animation of the hand-drawn or stop-motion action kind. Video was much more fluid and musical as a form."
Going to school and scoring for film and television wasn't enough to satisfy Laurie's endless curiosity. Besides playing guitar, she'd been working with analog modular instruments by Buchla, Electrocomp, Moog and Ionic/Putney. After a few years of experimentation she outgrew these synths and started seeking something with logical control and a larger capacity for memory. This led Laurie to the work being done with computers and music at Bell Labs in Murray Hill. At first she was a resident visitor at Bell Labs, someone who had the privilege of working and researching there, but not the privilege of being on Ma Bell's payroll.
Laurie had already been playing the ALICE machine when the Bell Telephone Company needed to film someone playing it for the 50th anniversary of The Jazz Singer. She had become something of a fixture at Murray Hill, so the company hired her as a musician. Not that the engineers at Bell who created the musical instruments were unmusical, but they were engineers. Laurie had the necessary background as a composer and an interest in how technology could open up musical expression; she was the perfect fit.
In 1973 while still working on her Masters she started getting her GROOVE on at Bell Labs, using the system developed by Max Mathews and Richard Moore.
GROOVE was to prove the perfect vehicle for expressing Spiegel's creative ideas. While Max Mathews was bouncing around between a dozen different departments, Laurie settled in with the system at Murray Hill.
In the liner notes to the reissue of her Expanding Universe album created with GROOVE she wrote, “Realtime interaction with sound and interactive sonic processes were major factors that I had fallen in love with in electronic music (as well as the sounds themselves of course), so non-realtime computer music didn’t attract me. The digital audio medium had both of the characteristics I so much wanted, but it was not yet possible to do much at all in real time with digital sound. People using Max’s Music V were inputting their data, leaving the computer running over the weekend, and coming back Monday to get their 30 seconds of audio out of the buffer. I just didn’t want to work that way.
But GROOVE was different. It was exactly what I was looking for. Instead of calculating actual audio signal, GROOVE calculated only control voltage data, a much lighter computational load. That the computer was not responsible for creating the audio signal made it possible for a person to interact with arbitrarily complex computer-software-based logic in real time while listening to the actual musical output. And it was possible to save both the software and the computed time functions to disk and resume work where we left off, instead of having to start all over from scratch every time or being limited to analog tape editing techniques ex post facto of creating the sounds in a locked state on tape.”
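The computational saving Spiegel describes is easy to put in rough numbers. Here is a back-of-the-envelope sketch; the rates used are illustrative assumptions, not GROOVE's documented figures:

```python
# Rough comparison of the data rates involved in synthesizing audio
# directly versus computing only control voltages, as GROOVE did.
# All figures here are illustrative assumptions, not GROOVE's specs.

AUDIO_RATE = 30_000    # audio samples per second for full-bandwidth sound
CONTROL_RATE = 100     # control-voltage updates per second (gestures are slow)
CHANNELS = 14          # GROOVE sent 14 control voltages to the analog room

audio_values_per_second = AUDIO_RATE                  # one mono audio stream
control_values_per_second = CONTROL_RATE * CHANNELS   # all 14 control lines

# Even driving all 14 channels, the control computation touches far
# fewer numbers per second than a single stream of raw audio would.
print(audio_values_per_second // control_values_per_second)  # prints 21
```

With these assumed rates the computer handles roughly twenty times less data per second, which is why the machine could keep up with a performer in real time while the analog hardware did the actual sound-making.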
RECORD IN A BOTTLE
Laurie's most famous work is also the one most likely to be heard by space aliens. It was a realization of Johannes Kepler's Harmonices Mundi using the GROOVE system, and it was the first track featured on the golden phonograph records placed aboard the Voyager spacecraft launched in 1977. The records contain sounds and images intended to portray the vast diversity of life and culture on planet Earth. They form a kind of time capsule, a message in a bottle sent off into interstellar space.
Carl Sagan chaired the committee that determined what contents should be put on the record. He said “The spacecraft will be encountered and the record played only if there are advanced space-faring civilizations in interstellar space, but the launching of this 'bottle' into the cosmic 'ocean' says something very hopeful about life on this planet."
A message in a bottle isn't the most efficient way of communicating if your purpose is to reach a specific person in a short amount of time. If, however, you trust in fate or providence and the natural waves of the ocean to guide the message to whomever it is meant to reach, it can be oracular.
Like many musicians before her Laurie had been fascinated by the Pythagorean dream of a music of the spheres. When she set about to realize Kepler’s 17th century speculative composition, she had no idea her music would actually be traveling through the spheres. Kepler’s Harmonices Mundi was based on the varying speeds of orbit of the planets around the sun. He wanted to be able to hear “the celestial music that only God could hear” as Spiegel said.
"Kepler had written down his instructions but it had not been possible to actually turn it into sound at that time. But now we had the technology. So I programmed the astronomical data into the computer, told it how to play it, and it just ran."
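In the spirit of that process, a toy sketch of how astronomical data can be "told how to play": map each planet's mean angular velocity (inversely proportional to its orbital period) onto an audible frequency. The mapping and constants here are my own assumptions for illustration, not Spiegel's actual GROOVE program.

```python
# Toy realization in the spirit of Kepler's Harmonices Mundi: faster
# orbits sound higher. The mapping and the 220 Hz anchor are
# illustrative assumptions, not Spiegel's actual program.

PERIOD_YEARS = {          # sidereal orbital periods of the classical planets
    "Mercury": 0.241, "Venus": 0.615, "Earth": 1.0,
    "Mars": 1.881, "Jupiter": 11.86, "Saturn": 29.46,
}

def planet_freq(period_years, k=220.0):
    """Frequency proportional to mean angular velocity (1/period).
    k pins Earth (period 1.0 years) to 220 Hz, an arbitrary choice."""
    return k / period_years

for name, period in PERIOD_YEARS.items():
    print(f"{name:8s} {planet_freq(period):8.1f} Hz")
```

Under this mapping Mercury rings out near 913 Hz while Saturn rumbles far below the audible floor, which hints at why any literal rendering of the planets' motions comes out strange to the ear.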
The resulting sounds aren't the kind of thing you'd typically put on your turntable to relax after a hectic day. They are actually kind of agitating. Yet if you listen to the piece as the product of a mathematical and philosophical exercise, it can still be enjoyable.
Other sounds that can be heard on the Voyager Golden Records include spoken greetings from the people of Earth in fifty-five languages, Johnny B. Goode by Chuck Berry, Melancholy Blues by Louis Armstrong, and music from all around the world, from folk to classical. Each record is encased in a protective aluminum jacket and includes a cartridge and a needle for the aliens. Symbolic instructions, kind of like those for building a piece of furniture from Ikea, show the origin of the spacecraft and indicate how the record is to be played. In addition to the music and sounds, 115 images are encoded in analog form.
Laurie was in Woodstock, New York when she received a phone call requesting the use of her music for the record. “I was sitting with some friends in Woodstock when a telephone call was forwarded to me from someone who claimed to be from NASA, and who wanted to use a piece of my music to contact extraterrestrial life. I said, 'C'mon, if you're for real you better send the request to me through the mail on official NASA letterhead!'”
It turned out to be the real deal and not just a prank on a musician.
In 2012 Voyager 1 entered interstellar space. And it's still out there running, sending back information. Laurie says, "It's extremely heartening to think that our species, with all its faults, is capable of that level of technical operation. We're talking Apple II level technology, but nobody's had to go out there and reboot them once!"
AN EXPANDING UNIVERSE
Laurie explored many other ideas within the structure of the highly adaptable GROOVE system, taking naps in the Bell Labs anechoic chamber when she needed a rest during the frequent all-nighters she pulled to get her work out into the world.
But getting those ideas into a form fit for a golden record, or more common earthbound vinyl, was not easy. The results, however, were worth the effort of working with a system that took up space in multiple rooms.
“Down a long hallway from the computer room …was the analog room, Max Mathews’s lab, room 2D-562. That room was connected to the computer room by a group of trunk cables, each about 300 feet long, that carried the digital output of the computer to the analog equipment to control it and returned the analog sounds to the computer room so we could hear what we were doing in real time. The analog room contained 3 reel-to-reel 1/4” two-track tape recorders, a set of analog synthesizer modules including voltage-controllable lab oscillators (each about the size of a freestanding shoe box), and various oscillators and filters and voltage-controllable amplifiers that Max Mathews had built or acquired. There was also an anechoic sound booth, meant for recording, but we often took naps there during all-nighters. Max’s workbench would invariably have projects he was working on on it, a new audio filter, a 4-dimensional joystick, experimental circuits for his latest electric violin project, that kind of stuff.
Because of the distance between the 2 rooms that comprised the GROOVE digital-analog-hybrid system, it was never possible to have hands-on access to any analog synthesis equipment while running the computer and interacting with its input devices. The computer sent data for 14 control voltages down to the analog lab over 14 of the long trunk lines. After running it through 14 digital-to-analog converters (which we each somehow chose to calibrate differently), we would set up a patch in the analog room’s patch bay, then go back to the computer room and the software we wrote would send data down the cables to the analog room to be used in the analog patch. Many many long walks between those two rooms were typically part of the process of developing a new patch that integrated well with the controlling computer software we were writing.
So how was it possible to record a piece with those rooms so far apart? We were able to store the time functions we computed on an incredibly state-of-the-art washing-machine-sized disk drive that could hold up to a whopping 2,400,000 words of computer data, and to store even more data on a 75 ips computer tape drive. When ready to record, we could walk down and disconnect the sampling rate oscillator at the analog lab end, walk back and start the playback of the time functions in the computer room, then go back to the analog lab, get our reel-to-reel deck physically patched in, threaded or rewound, put into record mode and started running. Then we’d reconnect the sampling rate oscillator, which would start the time functions actually playing back from the disk drive in the other room, and then the piece would be recorded onto audio tape.”
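To put that "whopping" capacity in perspective, a rough estimate of how much control data the drive could hold. The capacity and channel count come from Spiegel's account above; the control sampling rate is purely an assumed figure for illustration:

```python
# A rough estimate of how much music fits in 2,400,000 words of disk.
# Drive capacity and channel count come from the text; the control
# sampling rate is an assumed figure for illustration only.

DISK_WORDS = 2_400_000   # capacity of the washing-machine-sized drive
N_CHANNELS = 14          # control voltages sent down to the analog room
CONTROL_RATE = 100       # assumed control-function samples per second

# One word per control sample, across all channels:
seconds = DISK_WORDS / (N_CHANNELS * CONTROL_RATE)
print(f"about {seconds / 60:.0f} minutes of control data")
```

Under those assumptions the drive holds under half an hour of 14-channel control functions, which makes it clearer why the word "whopping" is both sincere for 1970s hardware and amusing today.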
Every piece on her album, The Expanding Universe, was recorded at Bell Labs. She computed in real time the envelopes for individual notes, how they were placed in the stereo field and their pitches. “Above the level of mere parameters of sound were more abstract variables, probability curves, number sequence generators, ordered arrays, specified period function generators, and other such musical parameters as were not, at the time, available to composers on any other means of making music in real time.”
Computer musicians today who are used to working with programs like Reaktor, Pure Data, Max/MSP, Ableton, SuperCollider and a slew of others take for granted the ability to manipulate sound as it is being made, on the fly, and with a laptop. Back then being able to do these things was state of the art, but doing them required huge effort and took up a lot of space.
During the height of the progressive rock era, making music with computers was also risky business on the level of personal politics. Computers weren't seen in a positive light. They were the tool of the Establishment, man, used for calculating the paths of nuclear missiles and storing your data in an Orwellian nightmare. Musicians who chose to work with technology were often despised at this time. There was an attitude that you were ceding your creative humanity to a cold dead machine. "Back then we were most commonly accused of attempting to completely dehumanize the arts," she said. This macho prog rock tenor haunted Laurie, despite her being an accomplished classical guitarist, capable of shredding endless riffs on an electrified axe if she chose to.
She also took risks in her compositions inside the avant-garde circles she frequented. Her music is full of harmony when dissonance was all the rage. “It wasn’t really considered cool to write tonal music,” she said, speaking of the power structures at play in music school. All I know is that it’s a good thing she listened to the music she had inside of her.
Between 1974 and 1979 Laurie pursued the idea that GROOVE could be used to create video art with just a little tweaking of the system. Unlike the hours of music released on her Expanding Universe album, her video work at Bell didn't get the documentation it deserved. This was in part due to the system's early demise. Hardware changes at the lab meant that few records and tracings were left behind.
VAMPIRE, however, is still worth mentioning. It stands for Video And Music Program for Interactive Realtime Exploration/Experimentation. Laurie was able to turn GROOVE into a VAMPIRE with the help of computer graphics pioneer Ken Knowlton. Ken was also an artist and a researcher in the field of evolutionary algorithms, something Laurie would later take up and apply to music. In the '60s Knowlton had created BEFLIX (Bell Flicks), a programming language for bitmap computer-produced movies. After Laurie got to know him they soon started collaborating. It was another avenue for her to pursue her ideas for making musical structures visible.
Laurie reasoned that if computer logic and languages had made it possible to interact with sound in real time, then the GROOVE system should be powerful enough to handle the real time manipulation of graphics and imagery. She started testing this theory using a program called RTV (Real Time Video) and a routine given to her by Ken. She wrote a drawing program, similar to what would now be called a paint program. It became the basis on which VAMPIRE was built.
With Ken she worked out a routine for a palette of 64 definable bitmap textures. These could be used as brushes, alphabet letters, or other images. The patterns were entered on a box with 10 columns, each column having 12 buttons, and each button representing a bit that could be on or off.
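Entering a texture one row of buttons at a time is essentially building a small bitmap bit by bit. A minimal sketch of the idea, where the sizes, symbols and layout are illustrative assumptions rather than the original hardware's:

```python
# Sketch of entering a bitmap texture as rows of on/off buttons, in the
# spirit of VAMPIRE's texture-entry box. The sizes, symbols and layout
# here are illustrative assumptions, not the original hardware's.

def rows_to_bitmap(rows):
    """Each row is a string of '.' (button off) and 'X' (button on);
    return the texture as one integer per row, leftmost button high bit."""
    return [int("".join("1" if c == "X" else "0" for c in row), 2)
            for row in rows]

# A small diamond-shaped brush texture:
texture = rows_to_bitmap([
    "..XX..",
    ".X..X.",
    "X....X",
    ".X..X.",
    "..XX..",
])
print(texture)  # [12, 18, 33, 18, 12]
```

Each integer packs one row of button states, so a whole palette of 64 such textures stays compact in memory.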
In addition to weaving strands of sound, Laurie was also a hand weaver. Cards with small holes in them have long been used as one approach to the art form. Card weaving is a way to create patterned woven bands, both beautiful and sturdy. Some may think the cards are a simple tool, but they can produce weavings of infinite design and complexity. Hand weaving cards are made out of cardboard or cardstock, with holes in them for the threads, very similar to the Hollerith punch cards used for programming computers. She hit upon the idea that she could create punch cards to enter batches of patterns via the card reader on the computer. After consulting some of her weaving books she made a large deck of cards she could shuffle and feed into the system.
Laurie quickly found that she enjoyed playing the drawing parameters just as one would play a musical instrument. Instead of changing pitch, duration and timbre, she could change the size, color and texture of an image as she drew it in real time with switches and knobs, making it appear on the monitor. Her skills as a guitarist translated directly to this ability. One hand would do the drawing, perhaps the same hand that did the strumming and plucking of the strings. The other hand would change the parameters of the image using a joystick and the other tools, just as it might change chords on one of her lutes, banjos or mandolins.
She saw the objects on the screen as melodies, but it was just one line of music. She wanted more lines, as counterpoint was her favorite musical form. She wanted to be able to weave multiple strands of images together. So she wrote another realtime device into the program to interact with: a square box of 16 buttons for typical contrapuntal options as applied to images. This gave her a considerable expansion of options and variables to play with.
After all this work she eventually hit a wall in what she could achieve with VAMPIRE in terms of improvisation. “The capabilities available to me had gotten to be more than I could sensitively and intelligently control in realtime in one pass to anywhere near the limits of what I felt was their aesthetic potential.” It had reached the point where she needed to think in terms of composition.
Ken Knowlton’s work with algorithms was beginning to rub off on her, and she started to think of how “powerful evolutionary parameters in sonic composing, and the idea of organic or other visual growth processes algorithmically described and controlled with realtime interactive input, and of composing temporal structures that could be stored, replayed, edited, added to (‘overdubbed’ or ‘multitracked’), refined, and realized in either audio or video output modalities, based on a single set of processes or composed functions, made an interface of the drawing system with GROOVE's compositional and function-oriented software an almost inevitable and irresistible path to take. It would be possible to compose a single set of functions of time that could be manifest in the human sensory world interchangeably as amplitudes, pitches, stereo sound placements, et cetera, or as image size, location, color, or texture (et cetera), or (conceivably, ultimately) in both sensory modalities at once.”
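The idea of one composed function of time rendered in more than one sensory modality can be sketched very simply. The specific curve and mappings here are illustrative assumptions, not anything from GROOVE or VAMPIRE:

```python
import math

# One composed function of time rendered in two sensory modalities:
# the same control curve can be heard as a pitch or seen as an image
# size. The curve and both mappings are illustrative assumptions.

def control(t):
    """A composed function of time, ranging over [0, 1]."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)

def as_pitch(v, low=220.0, high=880.0):
    """Render the control value as a frequency in Hz."""
    return low + v * (high - low)

def as_size(v, max_px=64):
    """Render the same control value as an image size in pixels."""
    return round(v * max_px)

v = control(0.25)               # the curve's peak: v == 1.0
print(as_pitch(v), as_size(v))  # 880.0 64
```

The composition lives in `control`; whether it becomes sound or image is just a choice of rendering function, which is the interchangeability Spiegel describes.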
Ever the night owl Laurie said of her work with the system, “Like any other vampire, this one consistently got most of its nourishment out of me in the middle of the night, especially just before dawn. It did so from 1974 through 1979, at which time its CORE was dismantled, which was the digital equivalent of having a stake driven through its art.”
ECHOES OF THE BELL
The echoes of Laurie’s time spent at Bell Laboratories can be found in the work she has done since then, even as she was devastated by the death of GROOVE and VAMPIRE.
She went on to write the Music Mouse software in 1986 for Macintosh, Amiga and Atari computers, and also founded the New York University Computer Music Studio. She has continued to write about music for many journals and publications and has continued to compose, applying her knowledge of algorithmic composition and information theory to her work.
Now the tools for making computer music can be owned by many people and used in their own home studios, but the echo of the Bell is still heard.
This article only scratches the surface of Laurie's life and work. A whole book could be written about her, and I hope someone will.
The liner notes to the 2012 reissue of Expanding Universe
Read the rest of the Radiophonic Laboratory series.
At Bell Labs, Max Mathews was the granddaddy of all its music makers. If you use a computer to make or record music, he is your granddaddy too. In 1957 Max wrote a program for a digital computer called Music I. It was a landmark demonstration of the ability to write code commanding a machine to synthesize music. Computers can do things and play things that humans alone cannot, and Music I opened up a world of new timbral and acoustic possibilities. This was a perfect line of inquiry for the director of Bell Laboratories' Behavioral and Acoustic Research Center, where Mathews explored a spectrum of ideas and technologies between 1955 and 1987. Fresh out of MIT, where he received an Sc.D. in electrical engineering, Mathews was ready to get to work, and Music I was only the beginning of a long creative push in technology and the arts.
Max’s corner of the sprawling laboratory in Murray Hill, New Jersey carried out research in speech communication, speech synthesis, human learning and memory, programmed instruction, the analysis of subjective opinions, physical acoustics, industrial robotics and music.
Max followed the Music I program with II, III, IV and V, each iteration taking its capabilities further and widening its parameters. These programs carried him through a decade of work and achievement. As noted in the chapter on the synthesis of speech, Max had created the musical accompaniment to "Daisy Bell (A Bicycle Built for Two)", later made famous by the fictional computer HAL in Stanley Kubrick's 2001: A Space Odyssey.
Starting in 1970 Max worked with Richard Moore to create the GROOVE system, intended to be a "musician-friendly" computer environment. The earlier programs broke incredible new ground, but using them leaned more towards those who could program computers and write code in their esoteric languages than towards the average musician or composer of the time. GROOVE, which stood for Generating Realtime Operations On Voltage-controlled Equipment, was the next step in bringing computer music to its potential users: a hybrid digital-analog system.
Max notes, “Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines; they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
But Chowning's FM synthesis had yet to be published when GROOVE was being created. It was still the early '70s, and affordable computers and synthesizers had yet to make it into homes outside those of the most devoted hobbyists. GROOVE was a first step toward making computer music in real time. The setup included an analog synth with a computer and monitor. The computer's memory made it appealing to musicians, who could store their manipulations of the interface for later recall. It was a clever workaround for the limitations of each technology: the computer was used for its ability to store the musical parameters, while the synth was used to create the timbres and textures without relying on digital programming. This setup allowed creators to play with the system and fine-tune what they wanted it to do, for later re-creation.
Bell Labs had acquired a Honeywell DDP-224 computer from MIT to use specifically for sound research. This is what GROOVE was built on. The DDP-224 was a 24-bit transistor machine that used magnetic core memory to store data and program instructions. Its disk storage also made it possible to write libraries of programming routines, which allowed users to create customized logic patterns. A composition could be tweaked, adjusted and mixed in real time on the knobs, controls, and keys. In this manner a piece could be reviewed as a whole or in sections and then replayed from the stored data.
When the system was first demonstrated in Stockholm at the 1970 conference on Music and Technology organized by UNESCO, music by Bartok and Bach was played. A few years later Laurie Spiegel would grasp the unique compositional possibilities of the system and take it to the max.
In the meantime Max himself was a guy in demand. IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in France brought him on board as a scientific advisor as it built its own state-of-the-art sound laboratory and studios between 1974 and 1980.
In 1987 Max left his position at Bell Labs to become a Professor of Music (Research) at Stanford University. There he continued to work on musical software and hardware, with a focus on using the technology in a live setting. “Starting with the GROOVE program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the Radio-Baton, plus a program, the Conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
Today the Music I software Max wrote through many versions lives on in the software suite Max/MSP. Named in honor of Max Mathews, the software is a powerful visual programming language for multimedia performance that has grown out of its musical core. The program has been alive, well and growing for more than thirty years and has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. The software is designed and maintained by the company Cycling '74.
Building off the gains in musical software developed by Mathews, Miller Smith Puckette (MSP) started to work on a program originally called The Patcher at IRCAM in 1985. This first version for Macintosh had a graphical interface that allowed users to create interactive scores. It wasn’t yet powerful enough to do real time synthesis. Instead it used MIDI and similar protocols to send commands to external sound hardware.
Four years later Max/FTS (Faster Than Sound) was developed at IRCAM. This version could be ported to the IRCAM Signal Processing Workstation (ISPW) for the NeXT computer system. This time around it could do real time synthesis using an internal hardware digital signal processor (DSP) making it a forerunner to the MSP extensions that would later be added to Max. 1989 was also the year the software was licensed to Opcode who promptly launched a commercial version at the beginning of the next decade.
Opcode held onto the program until 1997. During those years a talented console jockey named David Zicarelli further extended and developed the promise of Max. Yet Opcode wanted to end its run with the software. Zicarelli knew it had even further potential, so he acquired the rights and started his own company, Cycling '74. His timing proved fortuitous, as Gibson Guitar ended up buying Opcode and, after owning it for a year, shut it down. Such is the fabulous world of silicon corporate buyouts.
Miller Smith Puckette had in the meantime released the independent and open-source composition tool Pure Data (Pd). It was a fully redesigned tool that still fell within the same tradition as his earlier program for IRCAM. Zicarelli, sensing that a fruitful fusion could be made manifest, released Max/MSP in 1997, the MSP portion being derived from Puckette's work on Pure Data. The two have been inseparable ever since.
The achievement meant that Max was now capable of real time manipulation of digital audio signals sans dedicated DSP hardware. The reworked version of the program was also something that could work on a home computer or laptop. Now composers could use this powerful tool to work in their home studios. The musical composition software that had begun on extensive and expensive mainframes was now available to those who were willing to pay the entry fee. You didn’t need the cultural connections it took to work at places like Bell Labs or IRCAM. And if you had a computer but couldn’t afford the commercial Max/MSP you could still download Pd for free. The same is true today.
Extension packs were now being written by other companies, contributing to the ecosystem around Max. In 1999 the Netochka Nezvanova collective released a suite of externals that added extensive real-time video control to Max, making the program a great resource for multimedia artists. Various other groups and companies continued to tinker and add things on.
It got to the point where Max Mathews himself, well into his golden years, was learning how to use the program named after him. Mathews received many accolades and appointments for his work. He was a member of the IEEE, the Audio Engineering Society, the Acoustical Society of America, the National Academy of Sciences and the National Academy of Engineering, and a fellow of the American Academy of Arts and Sciences. He held the Silver Medal in Musical Acoustics from the Acoustical Society of America, and was named Chevalier de l'Ordre des Arts et des Lettres by the République Française.
Mathews died of complications from pneumonia on April 21, 2011, in San Francisco. He was 84. He was survived by his wife, Marjorie, his three sons and six grandchildren.
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University
One of the worst symphony orchestras ever to have existed now gets the respect it is due in a retrospective book published by Soberscove Press, collecting the memories, memorabilia and photographs of its talented members. The World's Worst: A Guide to the Portsmouth Sinfonia, edited by Christopher M. Reeves and Aaron Walker, though long overdue, has arrived just in time.
For those unfamiliar with the Portsmouth Sinfonia, here is the CliffsNotes version: founded by a group of students at the Portsmouth School of Art in England in 1970, this “scratch” orchestra was generally open to anyone who wanted to play. It ended up drawing art students who liked music but had no musical training or, if they were actual musicians, they had to choose and play an instrument that was entirely new to them. Another of the rules they set for themselves was to play only compositions that would be recognizable even to those who weren’t classical music buffs: the William Tell Overture being one example, Beethoven’s Fifth Symphony and Also Sprach Zarathustra being others. Their job was to play the popular classics, and to do it as amateurs. English composer Gavin Bryars was one of their founding members. The Sinfonia started off as a tongue-in-cheek performance art ensemble but quickly took on a life of its own, becoming a cultural touchstone over the decade of its existence, with concerts, albums, and a hit single on the charts.
The book has arrived just in time because one of the lenses through which the work of the Portsmouth Sinfonia can be viewed is that of populism; and now, when the people and politics of this planet have seen a resurgence of populist movements, the music of the Portsmouth Sinfonia can be recalled, reviewed, reassessed, and their accomplishments given wider renown.
One way to think of populism is as the antithesis of elitism. I have to say I agree with noted essayist John Michael Greer and his frequent tagline that “the opposite of one bad idea is usually another bad idea”. Populism may not be the answer to the world's struggle against elitism, yet it is a reaction, knee-jerk as it may be. Anyone who hasn’t been blindsided by the bourgeoisie will know the soi-disant elite have long looked down on those they deem lesser with an upturned nose and a sneer. Many of those sneering people have season tickets to their local symphony orchestra. They may not go because they are music lovers, but because it is a signifier of their class and social status. As much as the harmonious chords played under the conductor's swiftly moving baton induce in the listener a state of beatific rapture, there is, on the other hand, the very idea that attending an orchestral concert puts one at the height of snobbery. After all, orchestral music is not for everyone, as ticket prices ensure.
The Portsmouth Sinfonia was a remedy to all that. It put classical music back into the hands, and mouthpieces, of the people. It brought a sense of lightheartedness and irreverence into the stuffy halls that were so often filled with dour, serious people listening in such a serious way to such serious music. The Portsmouth Sinfonia made the symphony fun again, and showed that the canon of the classics shouldn’t just be left to the experts. Musical virtue wasn’t just for virtuosos, but could be celebrated by anyone who was sincere in their love of play.
Still, the Sinfonia was also more than that. It was an incubator for creative musicians and a doorway from which they could launch and explore what composer Alvin Curran has called the “new common practice”, that grab bag of twentieth-century compositional tools, tricks, and approaches, from the seriality of Schoenberg to the madcap tomfoolery of Fluxus. This book shows some of these explorations through the voices of the members of the Sinfonia as they recollect their ten-year experiment at playing, and being playful with, the classical hits of the ages.
As Brian Eno noted in the liner notes to Portsmouth Sinfonia Plays the Popular Classics, essential reading that is provided in the book, “many of the more significant contributions to rock music, and to a lesser extent avant-garde music, have been made by enthusiastic amateurs and dabblers. Their strength is that they are able to approach the task of music making without previously acquired solutions and without a too firm concept of what is and what is not musically possible.” Thus they have not been brainwashed, I mean trained, to the strict standards and world view of the classical career musician.
Gavin Bryars, another founding member of the orchestra, speaks to this in an interview with Michael Nyman, also included in the book. He said, “Musical training is geared to seeing your output in the light of music history.” Such training is what can make the job of the classical musician stressful and stifling. Stressful because of the degree of perfection players are required to achieve, and stifling because deviation, creative or otherwise, is disallowed. I’m reminded of how Karlheinz Stockhausen, when exploring improvisation and intuitive music, had to work hard at un-training his classically trained ensemble of musicians in the matter of being freed from the score.
The amateurs in the Portsmouth Sinfonia were free from the weight of musical history. If a wrong note was played, and many were, they could just get on with it, and let it be. This created performances full of humor and happy accidents even as they tried to render the music correctly as notated.
Training and discipline in music can give a kind of perfectionist's freedom when it comes to playing with total accuracy, but they take that freedom away when it comes to experimenting and exploration. Under the strictures of the conductor’s baton, playing in the symphony seems to be more about taking marching orders from a dictator than playing equally with a group of fellow musicians. John Farley, who took on the role of conductor within the Sinfonia, held his baton lightly. He wasn’t so much telling the other musicians how to play, or even keeping time, but acting out the part an audience expects of a conductor, serving as something of a foil for the musicians he was collaborating with in the performance.
One of the essential texts included in this book is “Collaborative Work at Portsmouth”, written by Jeffrey Steele in 1976. His piece shows how the Sinfonia really grew out of social concerns and a search for new ways of working together. Steele’s essay allies itself from the start with the constructivist movement in art, which he had been involved with as a painter. Constructivism was more concerned with the use of art in practical and social contexts. Associated with socialism and the Russian avant-garde, it took a steely-eyed look at the mysticism and spiritual content so often found in painting and music, on the one hand, and the academicism music can degenerate into on the other. The Portsmouth Sinfonia coalesced in a dialectical resolution between these two tendencies. Again, the opposite of one bad idea is usually another. The Sinfonia bypassed these binary oppositions to create a third pole.
A version of Steele’s essay was originally supposed to appear in an issue of Christopher Hobbs's Experimental Music Catalogue (EMC). A “Portsmouth Anthology” had been planned as an issue of the Catalogue, and a dummy of the publication was even made, but that edition of EMC never came out. It has been rescued here in this book. Other rescued bits include a selection of correspondence.
Besides the populist implications, and the permission given to enthusiastic amateurs to take center stage, the book explores the ideas, philosophies and development of the various artists and musicians who made up the Sinfonia itself. In the recollections section, Ian Southwood, David Saunders, Suzette Worden, Robin Mortimore and the group's manager and publicist Martin Lewis all reflect on their time as members. Reading these, you get the sense that the whole thing was a real community effort, a collaboration where everyone had a role and took initiative in whatever ways they could.
A long essay by Christopher M. Reeves, one of the editors of the book, puts the whole project into historical and critical context. Reeves writes that their “transition from intellectual deconstruction to punchline symphony is a trajectory in art that has little precedent, and points to a more general tendency in the arts throughout the 1970s, in the move from commenting or critiquing dominant culture, to becoming subordinate to it.” His essay traces the group's origins as a cross-disciplinary adventure through to their eventual appropriation by the mainstream as a kind of novelty music you might hear on an episode of Dr. Demento’s radio show.
Just how seriously was the Sinfonia supposed to be taken?
Reeves puts it thus: “It is within this question that the Sinfonia found a sandbox, muddying up the distinctions between seriousness and goofing off, intellectual exercises and pithy one liners.” The Sinfonia’s last album was titled Classical Muddly. The waters they left behind are still full of silt and only partially clear. This book does a good job of straining their efforts through a sieve and presenting the reader with the material and textual ephemera the group left behind, all in a beautifully made tome that is itself a showcase of the collaborative spirit found in the Portsmouth Sinfonia.
Robin Mortimore had told Melody Maker’s Steve Lake in 1974, “The Sinfonia came about partly as a reaction against Cardew [and his similar Scratch Orchestra]. He had the classical training and his audience was very elitist. But he wasn’t achieving anything. We listened, thought, ‘well, why don’t we have a go, it can’t be all that difficult. Y’know if Benjamin Britten and Sir Adrian Boult can do it, why can’t we?’”
In this time when so many artistic and musical institutions are underfunded, the Portsmouth Sinfonia can serve as a model. By having trained musicians play instruments they did not originally know how to play, and by having untrained musicians pick an instrument and be a part of an ensemble, they showed that with diligence anyone can bring the western canon of classical music to life, and often do it with much more humor and life than can be heard in contemporary concert halls.
Just maybe people are tired of being told how to think and what to do. Or how to play an instrument, and what “good” music should be played on that instrument. The World's Worst is a reminder of the inspiring example of the Portsmouth Sinfonia, and of the accomplishments that can be made when amateurs and non-experts take to the world’s stage and have fun making a raid on the western classical canon, wrong notes and all.
The World's Worst: A Guide to the Portsmouth Sinfonia, edited by Christopher M. Reeves and Aaron Walker, is available from Soberscove Press.
Just as the folks inside the Sound-House of the BBC’s Radiophonic Workshop continued to refine their approach and techniques in electronic music, another, older sound house back across the pond in America continued to research new “means to convey sounds in trunks and pipes, in strange lines and distances”. Where the BBC Radiophonic Workshop used budget-friendly musique concrète techniques to create its otherworldly incidental music, the pure research conducted at Bell Laboratories was widely diffused, and the electronic music systems that arose out of those investigations were incidental and secondary byproducts. The voder and vocoder were just the first of these byproducts.
Hal Alles was a researcher in digital telephony. The fact that he is remembered as the creator of what some consider the first digital additive synthesizer is a quirk of history. Other additive synthesizers had been made at Bell Labs, but these were software programs written for their supersized computers.
Alles needed to sell his digital designs both inside and outside a company that had long been the lord of analog, and the pitch needed to be interesting. The synthesizer he came up with was his way of demonstrating the company's digital prowess while entertaining his internal and external clients at the same time. It came to be called the Bell Labs Digital Synthesizer, or sometimes the Alles Machine or ALICE.
It should be noted that Hal bears no relation to HAL, the computer in 2001: A Space Odyssey. The engineer recalled those heady days of the late sixties and 1970s: “As a research organization (Bell Labs), we had no product responsibility. As a technology research organization, our research product had a very short shelf life. To have impact, we had to create ‘demonstrations’. We were selling digital design within a company with a 100 year history of analog design. I got pretty good at 30 minute demonstrations of the real time capabilities of the digital hardware I was designing and building. I was typically doing several demonstrations a week to Bell Labs people responsible for product development. I had developed one of the first programmable digital filters that could be dynamically reconfigured to do all of the end telephone office filtering and tone generation. It could also be configured to play digitally synthesized music in real time. I developed a demo of the telephone applications (technically impressive but boring to most people), and ended the demo with synthesized music. The music application was almost universally appreciated, and eventually a lot of people came to just hear the music.”
Max Mathews was one of the people who got to see one of these demos, where the telephonic equipment received a musical treatment. Mathews was the creator of the MUSIC-N series of computer music synthesis languages. He was excited by what Alles was doing and saw its potential. He encouraged the engineer to develop a digital music instrument.
“The goal was to have recording studio sound quality and mixing/processing capabilities, orchestra versatility, and a multitude of proportional human controls such as position sensitive keyboard, slides, knobs, joysticks, etc,” Mathews said. “It also needed a general purpose computer to configure, control and record everything. The goal included making it self-contained and ‘portable’. I proposed this project to my boss while walking back from lunch. He approved it before we got to our offices.”
Harmonic additive synthesis had already been used back in the 1950s by linguistics researchers working on speech synthesis, and Bell Labs was certainly in on the game. Additive synthesis at its most basic works by adding sine waves together to create timbre. The more common technique until that time had been subtractive synthesis, which uses filters to remove or attenuate frequencies from a harmonically rich source, carving away at its timbre.
Computers were able to do additive synthesis with pre-computed wavetables, but it could also be done by mixing the output of multiple sine wave generators. This is essentially what Karlheinz Stockhausen did with Studie II, though he achieved the effect by building up layers of pure sine waves on tape rather than with a pre-configured synth or computer setup.
That method is laborious. A machine that can do it for you goes a long way towards being able to labor at other things while making music.
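For the curious, the core idea of additive synthesis can be sketched in a few lines of Python. This is only an illustration of the principle, not ALICE's actual implementation: each harmonic is a sine wave at an integer multiple of the fundamental, scaled by its own amplitude and summed into the output. The sample rate, frequencies, and amplitude values here are arbitrary choices for the sketch.

```python
import math

SAMPLE_RATE = 8000  # assumed sample rate for this illustration

def additive_tone(freq, harmonic_amps, duration):
    """Sum sine-wave partials (freq, 2*freq, 3*freq, ...) to build a timbre."""
    n_samples = int(SAMPLE_RATE * duration)
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        # Each partial k sits at (k+1) times the fundamental frequency.
        s = sum(a * math.sin(2 * math.pi * freq * (k + 1) * t)
                for k, a in enumerate(harmonic_amps))
        samples.append(s)
    return samples

# A bright tone: strong fundamental with progressively weaker upper partials.
tone = additive_tone(220.0, [1.0, 0.5, 0.25, 0.125], 0.01)
```

Changing the list of harmonic amplitudes changes the timbre; that, in miniature, is what ALICE's banks of oscillators and amplitude multipliers made controllable in real time.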
ALICE was a hybrid machine in that it used a mini-computer to control a complex bank of sound-generating oscillators. The mini-computer was an LSI-11 from the Digital Equipment Corporation, a cost-reduced version of their PDP-11, a line that stayed in production for twenty years starting in 1970. This controlled the 64 oscillators, whose output was then mixed to create a number of distinct sounds and voices. It had programmable sound-generating functions and the ability to accept a number of different input devices.
The unit was outfitted with two 8-inch floppy drives supplied by Heathkit, who made their own version of the LSI-11 and sold it as the H11. AT&T rigged it out with one of their color video monitors. A custom converter sampled the analog inputs to 7-bit digital resolution 250 times a second. There were a number of inputs for working with ALICE in real time: two 61-key piano keyboards, 72 sliders alongside various switches, and four analog joysticks, just to make sure the user was having fun. These inputs were interpreted by the computer, which in turn sent parameter changes to the sound generators. The CPU could handle around 1,000 parameter changes per second before it got bogged down.
The sound generators themselves were quite complex: some 1,400 integrated circuits went into their design. Of the 64 oscillators, the first bank of 32 were used as master signals, meaning ALICE could achieve 32-note polyphony. The second set was slaved to the masters and generated a series of harmonics. If this wasn’t enough sound to play around with, ALICE was also equipped with 32 programmable filters and 32 amplitude multipliers. With the added bank of 256 envelope generators, ALICE had a lot of sonic potential and many signal paths that could be explored through her circuitry. All of those sounds could be mixed in many different ways into the 192 accumulators she was also equipped with. Each accumulator was then sent to one of the four 16-bit output channels and reconverted from digital back into analog at the audio output.
Waveforms were generated by looking up the amplitude for a given time in a 64k-word ROM table. Alles programmed a number of tricks into the table to reduce the number of calculations the CPU needed to run. 255 timers outfitted with 16 FIFO stacks controlled the whole shebang. The user put events into a timestamp-sorted queue that fed it all into the generator.
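The table-lookup approach can likewise be sketched in Python. What follows is a generic wavetable oscillator with a phase accumulator, not a reconstruction of Alles's ROM tricks; the table size and sample rate are arbitrary assumptions for the example. The trick is that the expensive sine calculation happens once, up front, and playback reduces to indexing into the table.

```python
import math

TABLE_SIZE = 1024   # illustrative; ALICE's ROM table was far larger
SAMPLE_RATE = 8000  # assumed sample rate for this sketch

# Precompute one cycle of a sine wave into a lookup table (the "ROM").
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_tone(freq, n_samples):
    """Generate samples by stepping a phase accumulator through the table."""
    phase = 0.0
    # How many table entries to advance per output sample for this pitch.
    step = freq * TABLE_SIZE / SAMPLE_RATE
    out = []
    for _ in range(n_samples):
        out.append(SINE_TABLE[int(phase) % TABLE_SIZE])
        phase += step
    return out

samples = wavetable_tone(440.0, 100)
```

Swapping the precomputed sine for any other single-cycle waveform changes the timbre without changing the playback code, which is part of what made table lookup so attractive for hardware.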
Though the designers claimed the thing was portable, all the equipment made it weigh in at a hefty 300 pounds, making it an unlikely option for touring musicians. For the world's first true digital additive synthesizer, it was quite the boat anchor.
Completed in 1976, the machine had only one full-length composition recorded for it, though a number of musicians, including Laurie Spiegel, whose work will be explored later, played the instrument in various capacities. For the most part, though, the Alles synth was brushed aside; even if the scientists and engineers at Bell Labs were tasked with pure research, they still had a business to answer to. A marketing use was found for Hal’s invention once again in 1977.
In that year the Motion Picture Academy was celebrating the 50th anniversary of the talkies. The sound work for The Jazz Singer, the first talking picture, had been done by Western Electric, with their Vitaphone system technology. The successful marriage of moving image and sound first seen and heard in that movie wouldn’t have been possible without the technology developed by the AT&T subsidiary and Ma Bell was still keen to be in on the commemoration of the film. ALICE is what they chose to use as the centerpiece for the event.
A Bell Labs software junkie by the name of Doug Bayer was brought in to improve the operating system of the synth and try to make the human interface a bit more user friendly. The instrument was flown to Hollywood at considerable risk. The machine was finicky enough without transporting it; taking it on a plane, where one good bump could knock out its components and send it into meltdown, was a real gamble.
So they hired musician and composer Laurie Spiegel, who’d already been working at the Labs without pay, to be filmed playing ALICE. The footage would be shown in the event that the musician they hired to play it live, Roger Powell, was unable to do so due to a malfunction. This film is the only known recording of the instrument in performance.
Yet to hear how the Bell Labs Digital Synthesizer sounds, look no further than Don Slepian’s album Sea of Bliss. Max Mathews had hired Slepian to work with the synth as an artist in residence between 1979 and 1982. Don had been born into a scientific family, and from an early age he demonstrated technical talent and musical ability. He had begun making music in 1968, programming his own computers, soldering together his own musical circuits, and experimenting with tape techniques. Working for the Defense Advanced Research Projects Agency (DARPA), Don served as a tester on an early iteration of the internet, and for a time he lived in Hawaii and played as a synthesizer soloist with the Honolulu Symphony. All of this made him a perfect fit as artist in residence at Bell Labs.
The results of his work are on the album: epic-length cuts of deep ambient music bringing relaxation and joy to the listener. It’s the audio version of taking Valium. Listen to it and feel the stress of life melt away.
Don Slepian described his 1980 masterpiece for the online Ambient Music Guide. “It’s stochastic sequential permutations (the high bell tones), lots of real time algorithmic work, but who cares? It's pretty music: babies have been born to it, people have died to it, some folks have played it for days continuously. No sequels, no formulas. It was handmade computer music."
The Bell Labs Digital Synthesizer was soon to leave its birthplace after Don had done his magic with the machine. In 1981 ALICE was disassembled and donated to the TIMARA Laboratories at the Oberlin Conservatory of Music.
Oberlin, and by extension TIMARA (Technology in Music and Related Arts), has a history that reaches back to the very beginnings of electronic music in the latter half of the nineteenth century. None other than Elisha Gray was an adjunct physics professor at the college. He is considered by some as the father of the synthesizer due to his invention of the musical telegraph and his seventy-plus patents for inventions that were critical in the development of telecommunications, electronic music and other fields. If it had not been for Gray’s electromechanical oscillator, Thaddeus Cahill would never have been able to create that power-hungry beast of an instrument, the Telharmonium.
The Music Conservatory at Oberlin dates back to 1865 and they joined the ranks of those radio and television stations who built electronic music studios with the opening of TIMARA in 1967. The department was founded by Olly Wilson as a response to the demand for classes in electronics from composition students. It became the first of a number of departments in the American higher education scene to create a space for experimentation in analog synthesis and mixed media arts.
Though ALICE is now enshrined in one of the many sound laboratories at TIMARA, her influence continued to be felt long after she was sequestered there. A number of commercial synthesizers based on the Alles design were produced in the 1980s.
The Atari AMY sound chip, the smallest of the products to be designed, is a case in point. The name stood for Additive Music sYnthesis. It still had 64 oscillators, but they were reduced to a single-IC sound chip, one that had numerous design issues. Additive synthesis could now be done with less, though it never really got into the hands of users. The chip was slated for a new generation of 16-bit Atari computers, for the next line of game consoles, and for the arcade division, but AMY never saw the light of day in any configuration. Even after Atari was sold in 1984, she remained waiting in the dark for a project, and was finally cut from new products after many rounds at the committee table, where so many dreams wind up dead.
Still other folks in the electronic music industry made use of the principles first demonstrated by ALICE. The Italian company Crumar and Music Technology of New York partnered to create Digital Keyboards. Like Atari, they wanted to resize the Alles Machine, to make it smaller. They came up with a two-part invention using a Z-80 microcomputer and a single keyboard with limited controls. They gave it the unimaginative name Crumar General Development System, and it sold in 1980 for $30,000. Since it was out of the price range of your average musician, they marketed the product to music studios. Wendy Carlos got her hands on one, and the results can be heard on the soundtrack to Tron.
Other companies got into the game and tried to produce something similar at lower cost, but none of them really managed to find a home in the market due to the attached price tags. When Yamaha released the DX7 in 1983 for $2,000, the demand for additive synths tanked. The DX7 implemented FM synthesis, which let it achieve many of the same effects as ALICE with as few as two oscillators. FM synthesis and its relationship to FM radio modulation will be looked at in detail in another article.
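To give a feel for why two oscillators can suffice, here is a minimal two-operator FM sketch in Python. The frequencies, modulation index, and sample rate are illustrative values, not DX7 parameters: one sine wave (the modulator) perturbs the phase of another (the carrier), producing sidebands in the spectrum that would otherwise require many additive partials.

```python
import math

SAMPLE_RATE = 8000  # assumed sample rate for this sketch

def fm_tone(carrier_freq, mod_freq, mod_index, n_samples):
    """Two-operator FM: a modulator sine wave perturbs the carrier's phase."""
    out = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        modulator = math.sin(2 * math.pi * mod_freq * t)
        # The modulation index scales how far the carrier's phase is pushed,
        # which controls how rich the resulting spectrum is.
        out.append(math.sin(2 * math.pi * carrier_freq * t + mod_index * modulator))
    return out

# An integer modulator-to-carrier ratio keeps the sidebands harmonic.
tone = fm_tone(220.0, 440.0, 2.0, 100)
```

Raising the modulation index brightens the tone by strengthening the sidebands, which is how a two-oscillator FM voice can mimic timbres that additive designs like ALICE built from dozens of oscillators.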
It had all started out as a way for Hal Alles to look at potential problems in digital communications, such as switching, distortion, and echo. It ended up becoming a tool for extending human creativity.
Read the other articles in the Radiophonic Laboratory series.
Justin Patrick Moore
Husband. Father/Grandfather. Writer. Green wizard. Ham radio operator (KE8COY). Electronic musician. Library cataloger.