Another way Information Theory has been used in the making of music is through the sonification of data. It is the audio equivalent of visualizing data as charts, graphs, and connected plot points on maps full of numbers. Audio, here meaning those sounds that fall outside of speech categories, has a variety of advantages over other forms of conveying information. The spatial, tempo, frequency, and amplitude aspects of sound can all be used to relay different messages. One of the earliest and most successful tools to use sonification is the Geiger counter, invented in 1908. Its sharp clicks alert the user to the level of radiation in an area and are familiar to anyone who is a fan of post-apocalyptic sci-fi zombie movies: the faster the clicks, the higher the level of radiation detected. A few years after the Geiger counter was invented, Dr. Edmund Fournier d'Albe came up with the optophone, a system that used photosensors to detect black printed typeface and convert it into sound. Designed to be used by blind people for reading, the optophone played a group of notes: g c' d' e' g' b' c. The notes corresponded with positions on the reading area of the device, and a note was silenced when black ink was sensed. These missing notes showed where the black ink was, and in this way a user could learn to read a text via sound. Though it was an ingenious invention, the optophone didn't catch on. Other areas where sonification did get used include pulse oximeters (devices that measure oxygen saturation in the blood), sonar, and auditory displays inside aircraft cockpits, among others. In 1974 a trio of experimental researchers at Bell Laboratories conducted the earliest work on auditory graphing: Max Mathews, F.R. Moore, and John M.
Chambers wrote a technical memorandum called “Auditory Data Inspection.” They augmented a scatterplot (a mathematical diagram using Cartesian coordinates to display values for two or more variables in a data set) with a variety of sounds that changed frequency, spectral content, and amplitude modulation according to the points on their diagram. Two years later the technology and science philosopher Don Ihde wrote in his book, Listening and Voice: Phenomenologies of Sound, "Just as science seems to produce an infinite set of visual images for virtually all of its phenomena--atoms to galaxies are familiar to us from coffee table books to science magazines; so 'musics,' too, could be produced from the same data that produces visualizations." Ihde pointed to sonification as a tool for creativity, so that we might, in effect, be able to listen to the light of the stars, the decomposition of soil, the rhythm of blood pulsing through the veins, or make a composition out of the statistics from a series of baseball games. It wasn't long before musical artists headed out to carve a way through the woods where Ihde had suggested there might be a trail.

SONIFICATION TECHNIQUES

There are many techniques for transforming data into audio. The range of sound, its many variables, and a listener's perception give ample parameters for transmitting information as audio. Increasing or decreasing the tempo, volume, or pitch of a sound is a simple method. For instance, in a weather sonification app temperature could be read as the frequency of one tone that rises in pitch as the temperature climbs and lowers as it falls. The percentage of cloud cover could be connected to another sound that increases or decreases in volume according to coverage, while wind speed could be applied as a resonant filter across another tone. The stereo field could also be used to portray information, with one set of data coming in on the left channel and another set on the right.
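The weather-app idea above can be sketched in a few lines of code. This is a minimal, hypothetical mapping of my own devising (the base pitch, the 10 Hz-per-degree slope, and the function name are all invented for illustration), not the design of any real app:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second of audio

def sonify_weather(temp_c, cloud_pct, seconds=1.0):
    """Map two weather readings onto a single tone.

    Hypothetical mapping: pitch starts at 220 Hz at 0 degrees C and
    rises 10 Hz per degree; loudness tracks cloud cover (0-100%).
    """
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    freq = 220.0 + 10.0 * temp_c   # temperature -> pitch
    amp = cloud_pct / 100.0        # cloud cover -> volume
    return amp * np.sin(2 * np.pi * freq * t)

# 20 C under 50% cloud cover: a 420 Hz tone at half volume
tone = sonify_weather(temp_c=20, cloud_pct=50)
```

A third stream, such as wind speed, would modulate a filter over another tone in the same way, and panning each stream left or right would use the stereo field as a further channel of information.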
The audio display of data is still in a wild west phase of development. No standard set of techniques has been adopted across the board. Due to the variety of information presented, and the settings where it is presented, researchers in this field are working towards determining which sets of sounds are best suited for particular applications. Programmers are writing programs, or adapting existing ones, to parse streams of information and render them according to sets of sonification rules. One particular technique is audification. It can be defined as a "direct translation of a data waveform to the audible domain." Data sequences are interpreted and mapped in time to an audio waveform, with various aspects of the data corresponding to various sound pressure levels. Signal processing and audio effects are used to further shape the data as sound. Listeners can then hear periodic components in the data as frequencies of sound. Audification thus requires large sets of data containing periodic components. When Gregory Kramer formalized the technique in 1992, the goal was to allow listeners to hear the way scientific measurements sounded. Audification has a number of applications in medicine, seismology, and space physics. In seismology it is used as an additional method of earthquake analysis alongside visual representations. NASA has applied audification to the field of astrophysics, using sounds to represent various radio and plasma wave measurements. Many musicians are finding inspiration in the sets of data culled from astronomy and astrophysics for the creation of new works. It's an exciting development in the field of music. American composer Gordon Mumma was inspired by seismography and incorporated it into his series of piano works called Mographs. A seismic wave is the energy moving through the Earth's layers caused by earthquakes, volcanic eruptions, magma movement, large landslides, and large man-made explosions.
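Because audification treats the data series itself as the waveform, the core operation is little more than normalization plus a change of playback rate. The sketch below is my own illustration of the principle, not Kramer's or anyone else's actual code; the helper name and rates are invented:

```python
import numpy as np

def audify(data, data_rate, playback_rate=44100):
    """Minimal audification sketch: the data series IS the waveform.

    Playing a signal recorded at data_rate back at playback_rate
    multiplies every frequency in it by playback_rate / data_rate,
    lifting slow periodic components into the audible range.
    """
    x = np.asarray(data, dtype=float)
    x = x - x.mean()                    # remove DC offset
    x = x / (np.abs(x).max() or 1.0)    # normalize to [-1, 1]
    shift = playback_rate / data_rate   # frequency multiplication factor
    return x, shift

# a 0.5 Hz "seismic" oscillation sampled at 100 Hz for 20 seconds
t = np.arange(0, 20, 1 / 100)
wave, shift = audify(np.sin(2 * np.pi * 0.5 * t), data_rate=100)
# played back at 44.1 kHz, the 0.5 Hz wiggle sounds at about 220 Hz
```

This is why audification needs long data sets with periodic content: a 20-second seismic record becomes only a fraction of a second of audio once it is sped up into hearing range.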
All of these events give out low-frequency acoustic energy that can be picked up by a seismograph. A seismogram has wiggly lines going all across it: these are the seismic waves the seismograph has recorded. Most of the waves are small tremors that no one felt; tiny waves called microseisms can even be caused by ocean waves hitting the beach, the heavy traffic of rumbling semi-trucks, and other things that might cause the seismograph to shake. Little dots along the graph mark the minutes so the seismic waves can be placed in time. When there is seismic activity the P-wave is the first wave to rise above the normal background of microseisms. P-waves are the fastest-moving seismic waves and are usually the first to be recorded by a seismograph. The next set of waves on the seismogram are the S-waves, which travel more slowly and appear bigger on the seismogram. Mumma based the structure and activity of each Mograph around data derived from seismogram recordings of earthquakes and underground nuclear explosions. The seismograms he was looking at were part of Cold War research that attempted to verify the differences between various seismic disturbances: the government wanted to know if it was a nuke that had hit San Francisco or just another rumbling from the earth. For Mumma, the structural relationships between the way the patterns of P-waves and S-waves traveled in time, and their reflections, had the “compositional characteristics of musical sound-spaces”. One of the strategies he used to sonify the seismograms into music was to limit the pitch vocabulary and intervals in each work. This gave Mumma the ability to draw attention to the complexity of time and rhythmic events within each Mograph. With these themes in mind, listening to the Mographs is like hearing tectonic plates being jostled around, here hitting each other abruptly, and there grinding slowly as two plates meet.
It is the sound of very physical waves rumbling through earth and stone and dirt, and beneath concrete, as interpreted by the piano, or the pairs of pianos used in some arrangements. In making these pieces from seismograph data Gordon Mumma sketched a process for others to use in future works of sonification.

BY THE CODE OF SOIL

Another down-to-earth sonification project deals with the soil beneath our feet. It started out as a commission for artist Kasia Molga from the GROW Observatory, a citizen science organization working to take action on climate change, build better soil, and grow healthier food, using data provided by the European Space Agency's Copernicus satellites to achieve its goals. Kasia began her project by analyzing the importance and meaning of soil, looking at what is happening to the soil now and how that impacts farmers, urbanites, and, well, everyone. She listened to the concerns of the scientists at GROW and spent a chunk of time parsing the data from the GROW sensors and the Sentinel-1A satellite that is used to assess soil moisture across Europe. In the course of her background work Kasia wondered how she could get important information about soil health out to the largest number of people, and she hit upon the idea of using a computer virus. The resulting project, By the Code of Soil, ended up working with people's computers and smartphones. The program didn't install any malware, self-replicate, or actually infect anyone's computer, but rather worked as a way to interrupt those people who spend most of their time in front of screens and remind them of the real analog world underneath their feet. She recruited a few other people to work with her on the project, tech artists Erik Overmeire and Dan Hett, and musician Robin Rimbaud, aka Scanner. Their project turns soil data into digital art that appears on a participant's computer (downloaded as an app) whenever the land-mapping satellite Sentinel-1A passes overhead.
The Sentinel satellite missions include radar and super-spectral imaging for land, ocean, and atmospheric monitoring. Each Sentinel mission is based on a constellation of two satellites that fulfill the revisit and coverage requirements for each individual mission, providing a robust dataset for researchers to access here on Earth. Sentinel-1 provides all-weather, day-and-night radar imaging for land and ocean services. GROW Observatory has gotten involved by deploying thousands of soil sensors all across Europe to improve the accuracy of the observations from the orbiting birds. Kasia designed the video art for the piece. Twice a day Sentinel-1 passes overhead in Europe, and the artwork and sounds change in real time as driven by the data. Kasia writes, “The artwork takes control of user’s computer for a minute or two in full screen mode. It manifests itself in a quite unexpected manner – that is it only will become visible on the computer when the Sentinel-1A satellite passes by the computer’s location – approximately twice within 24 hours but never at the same time of the day.” This is how it acts like a virus, erupting unexpectedly (unless you happen to be tracking the movement of the satellite). To portray the soil data visually Kasia started with a pixel and a matrix. She thought of these as single grains of soil, from which something else can be created and emerge. She used visual white noise, like that of a TV tuned to a channel with no broadcast, to show a signal coming out of the noise when the satellite passes, activating the algorithm written for the piece. “Various configurations of the noise – its frequencies, shapes, speed of motion and sizes – reflect the moisture, light, temperature and texture of the land near to the participant’s computer based on its IP address.” Meanwhile Scanner handled the sound design for the project. He took a similar approach to Kasia’s and looked at the granular aspects of sound.
“Trying to score data was a seemingly impossible task. How to soundtrack something that is ever changing, ever developing, ever in flux, refusing to remain still. Most times when one accompanies image with sound the image is locked, only to repeat again and again on repeated viewing. By the Code of Soil refuses to follow this pattern. Indeed it wasn’t until I watched the work back one evening, having last seen it the previous morning, that I realized how alive data can really be. The only solution sonically was to consider sound, like soil, as a granular tool. The sound needed to map the tiniest detail of alterations in the data received so I created sounds that frequently last half a second long and map these across hundreds of different possibilities. It was like a game of making mathematics colorful and curiously one can only hear it back by following the App in real time. I had to project into the future what I felt would work most successfully, since I never knew how the data would develop and alter in time either. As such the sound is as alive as the images, as malleable as the numbers which dictate their choices. Data agitates the sound into a restless and constantly mutable soundscape.” He spent many hours designing a library of sounds with Native Instruments Reaktor and GRM Tools and then mapping them into families. These families of sound were in turn mapped onto various aspects of the data: as readings from the soil sensors and satellite fed into the program, different sets of sounds and visuals were played according to the system. The success of this project for Kasia Molga and Scanner has led to them working together again on another multimedia work, Ode to Dirt, using soil data as a source code, for content, and inspiration.
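The "families of sound" idea can be pictured with a toy routing table. Everything here is invented for illustration (the file names, the moisture thresholds, the function name); it is a sketch of the general technique of mapping data ranges onto pools of short sounds, not the project's actual system:

```python
# Hypothetical families of half-second sounds, keyed by soil condition.
# File names and thresholds are made up for this sketch.
FAMILIES = {
    "dry":  ["click_01.wav", "grain_03.wav"],
    "damp": ["hum_02.wav", "grain_07.wav"],
    "wet":  ["drip_05.wav", "wash_01.wav"],
}

def pick_family(moisture_pct):
    """Route a soil-moisture reading (0-100%) to one family of sounds."""
    if moisture_pct < 20:
        return FAMILIES["dry"]
    if moisture_pct < 60:
        return FAMILIES["damp"]
    return FAMILIES["wet"]
```

With other sensor streams (light, temperature, texture) routed to their own families in the same way, each satellite pass triggers a different combination of tiny sounds, which is what keeps the soundscape, in Scanner's words, restless and constantly mutable.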
In this piece, “(de)Compositions bridges the source (input) and the data (output) through inviting viewers to take part in a multi sensory experience observing how the artwork - a fragment of the ‘land’ - changes through time - its form, sound and even smell - determined by the activities of the earthworms.”

READING MUSIC: LISTENING AS INFORMATION EXTRACTION

Many musicians know how to read sheet music. For composers it’s a basic tool. But what if average people learned how to read music, that is, listen to a composition and extract information from it as if it were a couple of paragraphs of text, or for really long works, a whole book? It strikes me that this is a distinct possibility as the field of sonification grows. Just as we have learned to signify and interpret letters and words, we may eventually come to have another shared grammar of sound that allows people to listen to the music of data and interpret that text with our ears. This new way of reading music as information has the possibility of transforming the field of radio as the imagination is opened up to new ways of receiving knowledge. It would be interesting to create radio that included sonified data as a regular part of news stories. This project of mapping knowledge to sound is implicit in Hesse’s description of the Glass Bead Game, and sonification is one way to bring it about as a reality. Yet to make the most of this listening opportunity, to listen to music in a way analogous to reading a book, we will have to grow new organs of perception. Pauline Oliveros started the work of carving out new pathways for the way we perceive the world in her Deep Listening workshops, concerts, and work in general. This work is being continued by her partner Ione and others trained in the skills of Deep Listening. Kim Cascone has also taught workshops on the subject of what he calls Subtle Listening.
Through a variety of meditation and other exercises Kim teaches his students how to “grow new organs of perception”. Perhaps through techniques such as these we may learn to listen to data in a way that engages the imagination and transforms it into knowledge.

REFERENCES:

Listening and Voice: Phenomenologies of Sound by Don Ihde, State University of New York Press, 2007
David Tudor & Gordon Mumma, Rainforest / 4 Mographs, Sections X and 7 from Gestures, New World Records, 2006
https://archive.growobservatory.org/code-of-soil.html
https://sentinel.esa.int/web/sentinel/missions/sentinel-1
https://vertigo.starts.eu/calls/2017/residencies/ode-from-the-dirt/detail/
Robin Rimbaud (project documentation sent in personal communication, September 29, 2020)
http://www.studiomolga.com/codeofsoil/
http://scannerdot.com/
https://vertigo.starts.eu/article/detail/by-the-code-of-soil-in-greece/
https://sonicfield.org/2014/03/subtle-listening-how-artists-can-develop-new-perceptual-circuits/
https://www.deeplistening.rpi.edu/deep-listening/

Read the rest of the RADIOPHONIC LABORATORY series.
Justin Patrick Moore is the author of The Radio Phonics Laboratory: Telecommunications, Speech Synthesis, and the Birth of Electronic Music.