Not all the musicians who use radios to make music take the output of a receiver directly into a mixing board or microphone to capture the voice of the aether, and not all of them use it as a source of direct audio sampling either. Some have trawled the megahertz and found inspiration in the voices they heard on radio talk-in shows, in the banter to be heard on the citizens band, and in the back and forth between hams in long distance rag chews over the shortwaves. Paddy McAloon found so much inspiration listening to the radio that he created an entire album, basing its lyrical elements on the various conversations he had heard and taped at his listening post. Paddy had been writing songs since he turned 13, but in 1999, at the age of 42, the ease with which he could write them suddenly changed. Not at the level of his mastery of melody, hooks, and poetic pop lyricism, but on a physical level, when both of his retinas detached, one right after the other. Suddenly blind, he was bound to the house with nothing but free time. Not only had Paddy been writing songs since he was 13, he had been in the habit of making chart topping albums with Prefab Sprout, the band he started with his brother Martin in Witton Gilbert, County Durham, England. Forming in 1977, the band played in a down-at-the-heels gas station owned by their father, joined by a friend from down the street, Michael Salmon, on drums. Five years later, after forging some musical chops, they went into the studio to record their first single, Lions in My Own Garden (Exit Someone), with a b-side called Radio Love. The lyrics are seemingly innocuous but hide a tragic undercurrent, and it's hard not to read an eerie prescience into the tune given Paddy's later album I Trawl the Megahertz. It starts with the static and whine of a shortwave set, and ends with the same, and the voice of a distant announcer.
“Requests for everyone / Love is on / Radio love is strong / Radio love / Shortwave for everyone / It was on the news, someone had drowned / She keeps hearing it over / All night long / All night long”. All the years spent listening to everything from David Bowie, to Igor Stravinsky, to T. Rex stood the band in good stead as Paddy continued to refine his craft of songwriting. Having written most of his songs on guitar, Paddy had a crisis around the instrument. Thinking he’d exhausted it, he picked up a Roland synth and started writing songs with that instead, just as the band was poised to start making albums. It was around this time that vocalist Wendy Smith was recruited. In 1984 came Swoon, followed by 1985’s Steve McQueen. A string of further albums led to Jordan in 1990. The band then went on hiatus until work began on Andromeda Heights, the last album to feature Wendy as vocalist, released in 1997. Two years later Paddy’s retinas detached, possibly from congenital factors. Repairing his eyes required extensive surgery, and he was left blind and stuck in the house. Composing hunched over the keyboard had become impossible, and he was starting to get twitchy, unable to work on new songs and unable to read. Radio became his solace. “I found all this frustrating as I've been writing songs since 1971, and am subject to itchy, unpleasant withdrawal symptoms if I cannot work. So, unable even to read, I passed the time by listening to and taping all kinds of TV and radio programmes, concentrating on phone-ins, chat shows, citizen's band conversations, military encryptions - you name it, I was eavesdropping on it.” McAloon found a lot of what he taped boring and banal, but within all the day to day chit chat of people talking on the air he caught glimpses of the sublime, and started having moments of inspiration. In his mind he began to edit what he had heard into the spoken word lyrics for what would become his next album.
"Odd words from documentaries would cross-pollinate with melancholy confidences aired on late night phone-ins; phrases that originated in different time zones on different frequencies would team up to make new and oddly affecting sentences. And I would change details to protect the innocent (or guilty), to streamline the story that I could hear emerging, and to make it all more...musical, I suppose." Using the snippets of radio conversation he had recorded, and further riffing off "mental edits" he’d made of these, he found the poetic moments within the plaintive complaints he heard on the radio and mixed these with things he had heard on various documentaries. A specific word like "ether" or "anesthetic" would strike him, and he started using these as launch points for his own writing. All the radio transmissions had been like a fertilizer, seeding his imagination. “After awhile I got enough of these sentences and radio thoughts, and I thought, well, I’m not going to be able to finish the thought by listening to radio to find the words I need, so sometimes I’ll fill them in.” He started writing musical parts to go with the words on his 1987 era Atari computer. Paddy had developed a philosophy of not wanting to use all the latest gear. “You find a piece of software you can use, you do it well, and then someone will tell you the computer you've got will break down, it's old now, you'll need to go over to a Mac. Let me tell you - I still use an Atari computer from 1987. I didn't like where the software went after that. Even on the Mac. I don't care how sophisticated it got - I knew how to use the old software in my limited way. And, finally, my eyes are not great. So I resent the learning curve with new equipment. I don't have Garage Band. I don't have a Mac. That’s what it is with me and old technology. I can't be bothered. Nor do I have the money to spend in the way I used to have. I don't have a massive guaranteed advance from a record company. 
I work very slowly by myself. BUT - I have a message on my studio wall that says: ‘Imagine that you crash landed on a desert island, but you've survived, you've walked away, and there's a small town there with a recording studio, and the recording studio is very old-fashioned. How thrilled would you be, having survived your plane crash, and how thrilled you'd be to have even the most basic recording equipment?’ That's me. That's me in my home studio full of this old gear that's out of date that other people can laugh at.” As Paddy worked with the Atari to compose the title track of I Trawl the Megahertz, the limitations of the software gave the piece a form to materialize within and determined its length. “I spent a long time working on that just as a computer piece, using the same old rubbishy synth sounds. Do you know why it is as long as it is? This is a terrible thing to tell you! 22 minutes of music is the length you'll get on an Atari! That's a bad reason for it. But in the end when I figured out the structure of it, it was just gonna fall within what an Atari could do.” The piece ends up being something of a movie to watch with your eyes closed, a narrative to listen to if you have been left without sight. Culled from the airwaves, it is also a perfect piece to be played on the radio. While Paddy is mostly known for his pop songs, this long player of a track is in a way akin to the kind of storytelling heard in the music of Laurie Anderson and in the operas of Robert Ashley, and it is perfectly suited for transmission itself. While not a radio drama, it can be listened to as one, and works like this could form the basis for a revivification of radio drama, infused with specially composed music, a delight to listeners near and far who happen to tune in, out of the blue and right on schedule. And though written on the Atari, the album proper ended up being recorded with a classical crossover ensemble, Mr McFall's Chamber.
Co-producer Calum Malcolm and composer David McGuinness helped Paddy take his original MIDI versions and produce scores from them for the final recordings. The final result is a breathtaking excursion into neo-romantic chamber pop. Echoes of Claude Debussy, Maurice Ravel, and Leonard Bernstein swirl and coalesce with the tender reading of his poetic text by vocalist Yvonne Connors. The second side holds eight more tracks, mostly instrumental; I’m 49 is the only one to use samples of the actual recordings he’d made off the air, delivering a melancholic meditation on one man’s post-divorce midlife crisis. At a time when Paddy was suffering the trials and travails of his own life, and the curveballs it had thrown at him, he plumbed the depths of our shared human condition, and found companionship and comfort in the voices that called out to him across the expansive aether. Special thanks to One Deck Pete for reminding me of this story.
Read the rest of the RADIOPHONIC LABORATORY series.

REFERENCES:
Paddy McAloon, I Trawl the Megahertz, Liberty EMI, 2003
Prefab Sprout, I Trawl the Megahertz (Remastered), Sony Music, 2019
https://www.theguardian.com/culture/2020/jun/30/paddy-mcaloon-thomas-dolby-how-we-made-steve-mcqueen-album
https://www.hotpress.com/music/interview-prefab-spout-paddy-mcaloon-trawl-megahertz-tales-22809556
https://www.irishtimes.com/culture/music/prefab-sprout-s-paddy-mcaloon-like-gandalf-on-his-way-to-work-in-the-house-of-lords-1.3765658
https://archive.org/details/PaddyMcAloonPaddyMcAloonITrawlTheMegahertzInterview
http://www.hanspeterkuenzler.com/paddy-mcaloon.html
Another way Information Theory has been used in the making of music is through the sonification of data. It is the audio equivalent of visualizing data as charts, graphs, and connected plot points on maps full of numbers. Audio, here meaning those sounds that fall outside of speech categories, has a variety of advantages over other forms of conveying information. The spatial, tempo, frequency, and amplitude aspects of sound can all be used to relay different messages. One of the earliest and most successful tools to use sonification is the Geiger counter, from 1908. Its sharp clicks alert the user to the level of radiation in an area and are familiar to anyone who is a fan of post-apocalyptic sci-fi zombie movies: the faster the tempo and number of clicks, the higher the amount of radiation detected. A few years after the Geiger counter was invented, Dr. Edmund Fournier d'Albe came up with the optophone, a system that used photosensors to detect black printed typeface and convert it into sound. Designed to be used by blind people for reading, the optophone played a group of notes: g c' d' e' g' b' c. The notes corresponded with positions on the reading area of the device, and a note was silenced if black ink was sensed. These missing notes showed the positions where the black ink was, and in this way a user could learn to read a text by sound. Though it was a genius invention, the optophone didn’t catch on. Other areas where sonification did get used include pulse oximeters (devices that measure oxygen saturation in the blood), sonar, and auditory displays inside aircraft cockpits, among others. In 1974 a trio of experimental researchers at Bell Laboratories conducted the earliest work on auditory graphing; Max Mathews, F.R. Moore, and John M.
Chambers wrote a technical memorandum called “Auditory Data Inspection.” They augmented a scatterplot (a mathematical diagram using Cartesian coordinates to display values for two or more variables in a data set) using a variety of sounds that changed frequency, spectral content, and amplitude modulation according to the points on their diagram. Two years later the philosopher of technology and science Don Ihde wrote in his book Listening and Voice: Phenomenologies of Sound, "Just as science seems to produce an infinite set of visual images for virtually all of its phenomena--atoms to galaxies are familiar to us from coffee table books to science magazines; so 'musics,' too, could be produced from the same data that produces visualizations." Ihde pointed to using the tool of sonification for creativity, so that we might, in effect, be able to listen to the light of the stars, the decomposition of soil, the rhythm of blood pulsing through the veins, or to make a composition out of the statistics from a series of baseball games. It wasn’t long before musical artists headed out to carve a way through the woods where Ihde had suggested there might be a trail.

Sonification Techniques

There are many techniques for transforming data into audio data. The range of sound, its many variables, and a listener’s perception give ample parameters for transmitting information as audio. Increasing or decreasing the tempo, volume, or pitch of a sound is a simple method. For instance, in a weather sonification app temperature could be read as the frequency of a tone that rises in pitch as the temperature climbs and lowers as it falls. The percentage of cloud cover could be connected to another sound that increases or decreases in volume according to coverage, while wind speed could be applied as a resonant filter across another tone. The stereo field could also be used to portray information, with one set of data coming in on the left channel and another set on the right.
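The weather-app idea above can be sketched in a few lines of code. This is a minimal illustration, not any real app: the mapping constants (220 Hz base pitch, 10 Hz per degree, cloud cover scaling volume down to 10%) are my own illustrative assumptions.

```python
import math

def sonify_weather(temp_c, cloud_pct, seconds=1.0, rate=8000):
    """Map temperature to pitch and cloud cover to loudness.

    Hypothetical mapping: 0 degrees C sounds at 220 Hz, and each degree
    adds 10 Hz; full cloud cover (100%) dims the tone to 10% volume.
    Returns a list of float samples in [-1.0, 1.0].
    """
    freq = 220.0 + 10.0 * temp_c
    amp = 1.0 - 0.9 * (cloud_pct / 100.0)
    n = int(seconds * rate)
    return [amp * math.sin(2 * math.pi * freq * t / rate) for t in range(n)]

# A hot clear day sounds high and loud; a cold overcast day low and quiet.
warm = sonify_weather(30, 0)    # 520 Hz at full volume
cold = sonify_weather(0, 100)   # 220 Hz at 10% volume
```

The same pattern extends naturally: wind speed could modulate a filter cutoff, and a second data stream could be rendered to the opposite stereo channel, as described above.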
The audio display of data is still in a wild west phase of development. No standard set of techniques has been adopted across the board. Due to the variables of the information presented, and the settings where it is presented, researchers in this field are working to determine which sets of sounds are best suited for particular applications. Programmers are writing programs, or adapting existing ones, to parse streams of information and render them according to sets of sonification rules. One particular technique is audification. It can be defined as a "direct translation of a data waveform to the audible domain." Data sequences are interpreted and mapped in time to an audio waveform, with various aspects of the data corresponding to various sound pressure levels. Signal processing and audio effects are used to further translate the data as sound. Listeners can then hear periodic components as frequencies of sound; audification thus requires large sets of data containing periodic components. Developed by Greg Kramer in 1992, the technique was meant to let listeners hear the way scientific measurements sounded. Audification has a number of applications in medicine, seismology, and space physics. In seismology it is used as an additional method of earthquake prediction alongside visual representations. NASA has applied audification to the field of astrophysics, using sounds to represent various radio and plasma wave measurements. Many musicians are finding inspiration in sets of data culled from astronomy and astrophysics for the creation of new works; it’s an exciting development in the field of music. American composer Gordon Mumma was inspired by seismography and incorporated it into his series of piano works called Mographs. A seismic wave is energy moving through the Earth's layers, caused by earthquakes, volcanic eruptions, magma movement, large landslides, and large man-made explosions.
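Audification's "direct translation" can be shown concretely: each data point simply becomes one audio sample. The sketch below, using only the Python standard library, normalizes an arbitrary data series to the [-1, 1] range and writes it out as a playable 16-bit WAV file; the sample rate and filename are arbitrary choices for illustration.

```python
import math
import struct
import wave

def audify(data, rate=8000, path="audified.wav"):
    """Audification sketch: treat each data point directly as one audio
    sample. The series is rescaled to [-1, 1] and written as 16-bit PCM,
    so any periodic component in the data becomes an audible frequency."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # avoid dividing by zero on a flat series
    samples = [2.0 * (x - lo) / span - 1.0 for x in data]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(2)     # 16-bit
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                               for s in samples))
    return samples

# A periodic "measurement" series plays back as a 440 Hz tone.
series = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
normed = audify(series)
```

Played faster or slower than real time, the same data shifts in pitch, which is how seismologists bring sub-audible earthquake rumbles up into hearing range.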
All of these events give out low-frequency acoustic energy that can be picked up by a seismograph. A seismogram has wiggly lines running all across it: these are the seismic waves the seismograph has recorded. Most of the waves are small ones that no one felt; tiny waves called microseisms can even be caused by ocean waves hitting the beach, the heavy traffic of rumbling semi-trucks, and other things that might cause the seismograph to shake. Little dots along the graph mark the minutes, so the seismic waves can be placed in time. When there is seismic activity, the P-wave is the first wave to rise above the small background microseisms. P-waves are the fastest moving seismic waves and are usually the first to be recorded by a seismograph. The next set of waves on the seismogram are the S-waves, which have a higher frequency than the P-waves and appear bigger on the seismogram. Mumma based the structure and activity of each Mograph on data derived from seismogram recordings of earthquakes and underground nuclear explosions. The seismograms he was looking at were part of cold-war research that attempted to verify the differences between various seismic disturbances; the government wanted to know whether it was a nuke that had hit San Francisco or just another rumbling from the earth. For Mumma, the structural relationships between the way the patterns of P-waves and S-waves traveled in time, and their reflections, had the “compositional characteristics of musical sound-spaces”. One of the strategies he used to sonify the seismograms into music was to limit the pitch-vocabulary and intervals in each work. This gave Mumma the ability to draw attention to the complexity of time and rhythmic events within each Mograph. With these themes in mind, listening to the Mographs is like hearing tectonic plates being jostled around, here hitting each other abruptly, and there in a slow silence that grinds as two plates meet.
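The limited-pitch-vocabulary strategy can be sketched as a simple quantization step. To be clear, this is loosely inspired by the Mographs, not Mumma's actual procedure: the five-note set, the mapping rule, and the sample trace are all my own illustrative choices.

```python
# A deliberately small pitch vocabulary, in the spirit of limiting
# the intervals available to each piece (an illustrative choice).
PITCH_SET = ["C", "Eb", "F#", "A", "Bb"]

def quantize_trace(readings):
    """Map each seismogram reading onto the limited pitch set, so the
    amplitude contour of the trace becomes a sequence of pitches."""
    lo, hi = min(readings), max(readings)
    span = (hi - lo) or 1.0
    idx = lambda x: min(int((x - lo) / span * len(PITCH_SET)),
                        len(PITCH_SET) - 1)
    return [PITCH_SET[idx(x)] for x in readings]

# Quiet microseisms, then a sudden P-wave arrival and its decay.
trace = [0.1, 0.12, 0.9, 2.5, 1.8, 0.4]
melody = quantize_trace(trace)
```

Because the vocabulary is so small, what survives the mapping is mostly rhythm and contour, which matches the emphasis on time and rhythmic events described above.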
It is the sound of very physical waves rumbling through earth and stone and dirt, and beneath concrete, as interpreted by the piano, or by pairs of pianos in some arrangements. In making these pieces from seismograph data, Gordon Mumma sketched a process for others to use in future works of sonification.

By the Code of Soil

Another down to earth sonification project deals with the soil beneath our feet. It started out as a commission for artist Kasia Molga from the GROW Observatory, a citizen science organization working to take action on climate change, build better soil, and grow healthier food, using data provided by the European Space Agency's Copernicus satellites to achieve their goals. Kasia began her project by analyzing the importance and meaning of soil, and she looked at what is happening to the soil now and how that impacts farmers, urbanites, and, well, everyone. She listened to the concerns of the scientists at GROW and spent a chunk of time parsing the data from the GROW sensors and the Sentinel-1A satellite, which is used to assess soil moisture across Europe. In the course of her background work Kasia wondered how she could get important information about soil health out to the largest number of people, and she hit upon the idea of using a computer virus. The resulting project, By the Code of Soil, ended up working with people's computers and smartphones. The program didn’t install any malware, self-replicate, or actually infect anyone’s computer, but rather worked as a way to interrupt those people who spend most of their time in front of screens and remind them of the real analog world underneath their feet. She recruited a few other people to work with her on the project: tech artists Erik Overmeire and Dan Hett, and musician Robin Rimbaud, aka Scanner. Their project turns soil data into digital art that appears on a participant's computer (downloaded as an app) whenever the land-mapping satellite Sentinel-1A passes overhead.
The Sentinel satellite missions include radar and super-spectral imaging for land, ocean, and atmospheric monitoring. Each Sentinel mission is based on a constellation of two satellites that fulfill the revisit and coverage requirements of that individual mission, providing a robust dataset for researchers to access here on Earth. Sentinel-1 provides all-weather, day and night radar imaging for land and ocean services. GROW Observatory has gotten involved by deploying thousands of soil sensors all across Europe to improve the accuracy of the observations from the orbiting birds. Kasia designed the video art for the piece. Twice a day Sentinel-1 passes overhead in Europe, and the artwork and sounds change in real time, driven by the data. Kasia writes, “The artwork takes control of user’s computer for a minute or two in full screen mode. It manifests itself in a quite unexpected manner – that is it only will become visible on the computer when the Sentinel-1A satellite passes by the computer’s location – approximately twice within 24 hours but never at the same time of the day.” This is how it acts like a virus, erupting unexpectedly (unless you happen to be tracking the movement of the satellite). To portray the soil data visually Kasia started with a pixel and a matrix. She thought of these as single grains of soil, from which something else can be created and emerge. She used visual white noise, like that of a TV tuned to a channel with no broadcast, to show a signal coming out of the noise when the satellite passes, activating the algorithm written for the piece. “Various configurations of the noise – its frequencies, shapes, speed of motion and sizes – reflect the moisture, light, temperature and texture of the land near to the participant’s computer based on its IP address.” Meanwhile Scanner handled the sound design for the project. He took a similar approach to Kasia's and looked at the granular aspects of sound.
“Trying to score data was a seemingly impossible task. How to soundtrack something that is ever changing, ever developing, ever in flux, refusing to remain still. Most times when one accompanies image with sound the image is locked, only to repeat again and again on repeated viewing. By the Code of Soil refuses to follow this pattern. Indeed it wasn’t until I watched the work back one evening, having last seen it the previous morning, that I realized how alive data can really be. The only solution sonically was to consider sound, like soil, as a granular tool. The sound needed to map the tiniest detail of alterations in the data received so I created sounds that frequently last half a second long and map these across hundreds of different possibilities. It was like a game of making mathematics colorful and curiously one can only hear it back by following the App in real time. I had to project into the future what I felt would work most successfully, since I never knew how the data would develop and alter in time either. As such the sound is as alive as the images, as malleable as the numbers which dictate their choices. Data agitates the sound into a restless and constantly mutable soundscape.” He spent many hours designing a library of sounds with Native Instruments Reaktor and GRM Tools and then mapping them into families. These families of sound were in turn mapped onto various aspects of the data. With the data from the satellite and the ground sensors feeding into the program, different sets of sounds and visuals were played according to the system. The success of this project for Kasia Molga and Scanner has led to them working together again in creating another multimedia work, Ode to Dirt, using soil data as a source code, for content, and inspiration.
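The "families of sound" mapping Scanner describes can be illustrated with a toy dispatcher. Everything here is hypothetical: the family names, grain labels, and moisture thresholds are invented for the sketch and are not Scanner's actual Reaktor mapping, which the article does not specify.

```python
import random

# Hypothetical families of short grains, keyed by a soil condition.
GRAIN_FAMILIES = {
    "dry":   ["tick", "dust", "crackle"],
    "moist": ["drip", "squelch", "burble"],
    "wet":   ["wash", "pour", "rush"],
}

def grain_for(moisture):
    """Select a family from a normalized soil-moisture reading (0.0-1.0),
    then pick one short grain from that family at random."""
    if moisture < 0.33:
        family = "dry"
    elif moisture < 0.66:
        family = "moist"
    else:
        family = "wet"
    return family, random.choice(GRAIN_FAMILIES[family])

# An incoming sensor stream drives a sequence of grain events.
stream = [0.1, 0.5, 0.9]
events = [grain_for(m) for m in stream]
```

Randomizing the grain within each family is one way to keep the soundscape "restless and constantly mutable" while still letting the data dictate its overall character.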
In this piece “(de)Compositions bridges the source (input) and the data (output) through inviting viewers to take part in a multi sensory experience observing how the artwork - a fragment of the ‘land’ - changes through time - its form, sound and even smell - determined by the activities of the earthworms.”

READING MUSIC: LISTENING AS INFORMATION EXTRACTION

Many musicians know how to read sheet music; for composers it’s a basic tool. But what if average people learned how to read music, that is, listen to a composition and extract information from it as if it were a couple of paragraphs of text, or, for really long works, a whole book? It strikes me that this is a distinct possibility as the field of sonification grows. Just as we have learned to signify and interpret letters and words, we may eventually come to have another shared grammar of sound that allows people to listen to the music of data and interpret that text with our ears. This new way of reading music as information has the potential to transform the field of radio as the imagination is opened up to new ways of receiving knowledge. It would be interesting to create radio that included sonified data as a regular part of news stories. This project of mapping knowledge to sound is implicit in Hesse’s description of the Glass Bead Game; sonification is one way to bring it about as a reality. Yet to make the most of this listening opportunity, to listen to music in a way analogous to reading a book, we will have to grow new organs of perception. Pauline Oliveros started the work of carving out new pathways for the way we perceive the world in her Deep Listening workshops, concerts, and work in general. This work is being continued by her partner Ione, and by others trained in the skills of Deep Listening. Kim Cascone has also taught workshops on the subject of what he calls Subtle Listening.
Through a variety of meditation and other exercises Kim teaches his students how to “grow new organs of perception”. Perhaps through techniques such as these we may learn to listen to data in a way that engages the imagination and transforms it into knowledge.

REFERENCES:
Don Ihde, Listening and Voice: Phenomenologies of Sound, State University of New York Press, 2007
David Tudor & Gordon Mumma, Rainforest / 4 Mographs, Sections X and 7 from Gestures, New World Records, 2006
https://archive.growobservatory.org/code-of-soil.html
https://sentinel.esa.int/web/sentinel/missions/sentinel-1
https://vertigo.starts.eu/calls/2017/residencies/ode-from-the-dirt/detail/
Robin Rimbaud (project documentation sent in personal communication, September 29, 2020)
http://www.studiomolga.com/codeofsoil/
http://scannerdot.com/
https://vertigo.starts.eu/article/detail/by-the-code-of-soil-in-greece/
https://sonicfield.org/2014/03/subtle-listening-how-artists-can-develop-new-perceptual-circuits/
https://www.deeplistening.rpi.edu/deep-listening/

Read the rest of the RADIOPHONIC LABORATORY series.
Justin Patrick Moore, author of The Radio Phonics Laboratory: Telecommunications, Speech Synthesis, and the Birth of Electronic Music.