Friday, November 2, 2018

Sonification

Sonification - EMPAC 11-01-2018

Chris Chafe's lecture on sonification explored a new aspect of art, combining technology, science, and music to express data.  Chafe coined the term sonification while working at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). 
Chafe began his lecture by discussing an entertainment piece centered around the carbon dioxide levels of ripening tomatoes.  Chafe used computer-generated algorithms to create a mapping strategy from the amount of carbon dioxide produced by the tomatoes, and the amount of light falling on them as it ran through the gallery, over time.  Fifteen audio channels interpreted the data, creating a sci-fi-like soundtrack that surrounded listeners from all angles and made them feel immersed within the music.  This computer-generated music is unlike anything I have ever heard, consisting of pulsing beats with reverberating sounds.  At the end of a large build-up of rising tones and pulsing beats, the music dies off, representing the fully ripened tomato.  Fittingly, the tomatoes are then used to make tomato sauce, for a full multimedia approach.
            There was an extremely eye-opening moment when Chafe discussed the ice drilling in Antarctica used to study climate change.  Using bubbles trapped inside ice cores, one can examine different indicators of past climate.  The Eemian period, the warm interval before the last ice age, was about 5 to 10 degrees warmer than the current day.  That warm period, which preceded the plunge into a deep ice age, gives scientists an analog for what a warmer future might look like.  Chafe took the bubble data, showing the rising and falling of the Earth's CO2, and created a multi-toned piece.  The right ear gave sound to changes in carbon dioxide concentration and the left ear gave sound to temperature: the higher the pitch, the higher the concentration or temperature.  One could appreciate the changing time periods, as the onset of the 1900s caused a rapid increase in pitch and tempo in both ears, a terrifying realization of our polluted world.
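To get a sense of how a mapping like this might work in practice, here is a rough Python sketch of the two-channel idea, one data series per ear, with higher values mapped to higher pitches.  The frequency range, sample rate, and toy data here are my own illustrative assumptions, not the values Chafe actually used:

```python
import math

def value_to_pitch(value, lo, hi, f_min=220.0, f_max=880.0):
    """Map a data value linearly onto a frequency range (Hz)."""
    t = (value - lo) / (hi - lo)          # normalize to 0..1
    return f_min + t * (f_max - f_min)

def sonify_stereo(co2_series, temp_series, sr=8000, note_dur=0.25):
    """Render CO2 in the right channel and temperature in the left,
    one short sine tone per data point (higher value -> higher pitch)."""
    co2_lo, co2_hi = min(co2_series), max(co2_series)
    t_lo, t_hi = min(temp_series), max(temp_series)
    left, right = [], []
    n = int(sr * note_dur)                # samples per tone
    for co2, temp in zip(co2_series, temp_series):
        f_r = value_to_pitch(co2, co2_lo, co2_hi)
        f_l = value_to_pitch(temp, t_lo, t_hi)
        for i in range(n):
            right.append(math.sin(2 * math.pi * f_r * i / sr))
            left.append(math.sin(2 * math.pi * f_l * i / sr))
    return left, right

# Toy data only: CO2 (ppm) and a temperature index rising toward the present.
co2 = [280, 285, 300, 340, 410]
temp = [-0.3, -0.1, 0.0, 0.4, 1.0]
left, right = sonify_stereo(co2, temp)
```

Writing the two sample lists to a stereo WAV file would reproduce the effect described above: both ears climb in pitch together as the data approaches the present.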
            My personal favorite demonstration used data from the DNA sequencing of a plasmid, pairing different tones with different nucleotide identities.  The piece had a voice-over on top of the soundtrack, identifying which part of the plasmid correlated to which sounds.  For background, a plasmid is a circular piece of DNA that contains a promoter, a set of genes, and a terminator.  As one goes around the DNA, the data identifies which nucleotide (A, T, C, or G) is present.  During the piece, one could hear the sound travel around the room in a circle, just as a plasmid's DNA sequence does.  The sound created was very plunky and decisive, as one could hear each individual nucleotide.  After a couple of minutes, the data moved from nucleotides to codons, and one could hear groups of three nucleotides coding for a protein sequence.  I found this extremely interesting, as it combined music and biology, two fields one does not usually associate. 
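The nucleotide piece suggests an even simpler mapping: one distinct pitch per base, played in sequence order.  Here is a minimal sketch of that idea; the particular note assignments, the decaying envelope used to get a "plunky" feel, and the sample sequence are all my own assumptions, not the actual mapping used in the piece:

```python
import math

# Hypothetical mapping: each nucleotide gets its own pitch (Hz).
NOTE = {"A": 262.0, "T": 294.0, "C": 330.0, "G": 392.0}

def sonify_sequence(seq, sr=8000, note_dur=0.2):
    """One short sine 'plunk' per nucleotide, in sequence order."""
    samples = []
    n = int(sr * note_dur)                # samples per nucleotide
    for base in seq:
        f = NOTE[base]
        for i in range(n):
            # Linearly decaying envelope gives each base a plucked,
            # discrete feel, so individual nucleotides stay audible.
            env = 1.0 - i / n
            samples.append(env * math.sin(2 * math.pi * f * i / sr))
    return samples

plasmid_fragment = "ATGCGTACG"            # toy sequence for illustration
audio = sonify_sequence(plasmid_fragment)
```

Moving from nucleotides to codons, as the piece did, would amount to mapping each three-letter group to a single tone or chord instead of sounding the bases one by one.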



            Overall, I enjoyed the lecture on sonification and would like to further explore the idea of using sound to express data.  
