Sounds of Seismic (SOS) is an art-science auditory display software system broadcasting
continuous seismic sound generated from global earthquake data collected in real time. An internet audio streaming service, SOS
webcasts electroacoustic music as multi-channel, seismically generated sound, creating an infinite
computational earth-system soundscape.
SOS is a multi-disciplinary, internet-based sound performance that incorporates science, technology, engineering, art, and mathematics
(STEAM). The ideological premise behind SOS is to create greater social awareness of natural
ecological systems by processing seismic waveform data. SOS creates continuous, autonomous
streaming seismic audio compositions that transpose the frequency, depth, location, and energy release of natural and man-made seismic events.
Influenced by John Cage's seminal 'Variations VII' (1966), SOS is a continuous music/sound composition in which the score is
determined randomly by the occurrence of global seismic events. SOS is also a synthesis of the avant-garde sound art
form 'Musique Concrète'.
In the 1940s, Pierre Schaeffer conceived the theoretical underpinnings and conceptual framework for thinking about musical
forms created from non-musical, or 'found', sounds. SOS algorithms process seismic waveform data
collected by seismometers, treated as grand Musique Concrète instruments performing a continuous,
autonomous composition streamed across the internet 24/7, 365 days a year.
Earthquakes are essentially sound waves travelling outward from the focal point of a seismic rupture. The naturally occurring geophysical properties of
earthquakes effectively cause the earth to ring like a bell as seismic waves propagate and travel through it. Seismic waves occupy a frequency
spectrum below 1 Hz, whereas the human audio spectrum ranges from 20 Hz to 20 kHz, well above the seismic band. Although
seismic frequencies lie below the range of human hearing, digitally recorded seismograms can be sped up by factors between x276 and x2205,
shifting their content into frequencies audible to the human ear.
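The speed-up factors above amount to reinterpreting the seismogram's sample rate as an audio sample rate. A minimal sketch of this audification idea, using a synthetic 0.1 Hz "seismogram" and the x2205 factor quoted above (the signal, sample rate, and factor are illustrative, not the SOS pipeline's actual parameters):

```python
import numpy as np

# Synthetic "seismogram": a 0.1 Hz oscillation sampled at 20 Hz for 10 minutes.
fs_seismic = 20.0                       # recording sample rate, samples/second
duration = 600.0                        # seconds of recording
t = np.arange(0, duration, 1.0 / fs_seismic)
seismogram = np.sin(2 * np.pi * 0.1 * t)

# Audification: play the same samples back faster by relabelling the sample
# rate. A x2205 speed-up turns the 20 Hz recording rate into 44100 Hz audio.
speedup = 2205
fs_audio = fs_seismic * speedup         # 44100.0 Hz playback rate
pitch_hz = 0.1 * speedup                # the 0.1 Hz wave is heard near 220.5 Hz
playback_seconds = len(seismogram) / fs_audio

print(fs_audio, pitch_hz, round(playback_seconds, 3))
```

Ten minutes of sub-audio seismic motion collapses into roughly a quarter-second of audible sound; in practice the time-scale and granular techniques described below stretch such events back out for listening.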
To create music from seismic waveform data, audification and sonification
techniques are used to create an auditory display. Sonification is defined as the transformation of data relations into perceived relations in an acoustic
signal for the purposes of facilitating communication and/or interpretation. SOS makes earthquake data audible, producing real-time seismic
music. SOS is an interdisciplinary collaboration between media artists, software developers, musicians, and geophysicists from Australia, the Netherlands, and the USA.
The Sounds of Seismic system utilises the Earthquake Sound Engine (ESE), a C++ engine developed by Ryan McGee
at the AlloSphere Research Group during 2012. ESE processes miniSEED seismic waveform data, generating .mp3, .wav, and .ogg audio files
that are broadcast and streamed to the internet.
ESE-generated sound files destined for internet audio streaming are processed with dynamic-range compression, time-scale compression, high-pass
filtering, and granular synthesis, audio techniques kept within ranges derived from the seismic wave equation and the 'moment magnitude'
scale, which is determined by logarithmic calculation.
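The 'moment magnitude' scale referred to above is logarithmic in the event's seismic moment M0. A minimal sketch of the standard Hanks-Kanamori relation, with an illustrative moment value (not a real SOS event):

```python
import math

def moment_magnitude(m0_newton_metres: float) -> float:
    """Moment magnitude Mw from seismic moment M0 (in N*m), via the
    standard logarithmic relation Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# Illustrative moment value, roughly a magnitude-6 event:
mw = moment_magnitude(3.5e18)
print(round(mw, 2))

# Because the scale is logarithmic, each whole step in Mw corresponds to
# about 10**1.5 (~31.6x) more radiated energy.
energy_ratio_per_unit = 10 ** 1.5
```

This logarithmic spread is why the energy release of events must be compressed into a bounded audio range rather than mapped linearly to loudness.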
Python-scripted software developed by Stock Plum
collects near-real-time digital seismic waveform data from Incorporated Research Institutions for Seismology (IRIS) data
repositories and pipes it into the Earthquake Sound Engine, generating a near-real-time seismic auditory display.
Ref: IEI list of recent seismic events.
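The IRIS repositories mentioned above expose waveform data through the FDSN dataselect web service, which returns miniSEED by default. A minimal sketch of building such a request URL (the station, channel, and time window are illustrative; the actual parameters used by the SOS collector are not specified here):

```python
from urllib.parse import urlencode

# IRIS FDSN dataselect endpoint; a GET request here returns miniSEED data.
BASE = "http://service.iris.edu/fdsnws/dataselect/1/query"

params = {
    "net": "IU",                     # illustrative: Global Seismograph Network
    "sta": "ANMO",                   # illustrative station code
    "loc": "00",
    "cha": "BHZ",                    # broadband, high-gain, vertical component
    "start": "2013-01-01T00:00:00",  # illustrative ten-minute window
    "end": "2013-01-01T00:10:00",
}
url = BASE + "?" + urlencode(params)
print(url)
```

Fetching this URL and writing the response to disk would yield a miniSEED file of the kind ESE ingests.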
A content delivery platform for Sounds of Seismic publishes the internet audio compositions generated by the Earthquake Sound Engine. The public interface
of SOS, designed by Mr Snow, will enable users to select single- and multi-channel streams of seismic data. SOS interface extensions will
enable the mixing of SoundCloud sets, inviting public, user-generated seismic sound art mixes.
For the first year of public display, the SOS system will be hosted on a development server.
During this launch presentation and delivery period, the project will seek support
towards future hosting of SOS by earth science institutions such as Geoscience Australia,
GeoNet New Zealand, the US Geological Survey (USGS), and the Incorporated
Research Institutions for Seismology (IRIS).
Intended Project Outcomes
Development of SOS is a collaboration between media artists, software developers, musicians, and geophysicists, and is a sound
art extension of research undertaken at the AlloSphere Research Facility between Ryan McGee and D.V. Rogers during early 2012.
A key outcome of SOS research and development is to demonstrate how inter-disciplinary
collaboration and divergent thinking between the sciences and humanities lead to creative, visionary innovation.
Science and art are fundamentally alike in that both use modes of enquiry to seek out the unknown, while at the same
time artists and scientists approach creativity, exploration, and research from different perspectives. Science and
technology aspire to clean, clear answers to problems, whereas the humanities address ambiguity, doubt, and skepticism:
essential underpinnings in a complex, diverse, and turbulent world.
If creativity, collaboration, communication, and critical thinking, all touted as the hallmark skills for 21st-century success,
are to be cultivated, we need to ensure that the sciences, technology, engineering, and mathematics are drawn closer to the arts. The
arts and sciences are avatars of human creativity, and many of the great inventors and scientists were also musicians,
artists, writers, or poets: Galileo, Michelangelo,
Da Vinci, Zhang Heng, and Einstein, to name a few.
SOS software development aims to demonstrate how creative techniques of listening to live seismic data sets could have a social
outreach effect as an ecologically driven public audio broadcast. SOS will also give earth scientists a tool with which they can
listen to the specific sensors they are actively researching in their study of geological fault and volcanic systems.
It is envisioned that SOS could encourage other ecological data sets to be transposed into live auditory display systems.
SOS will also enable composers to work with a tool that generates multi-channel streams of seismic audio, inviting the creation of musical
compositions derived from seismic waveform data that monitors and records the tectonic shifts of the dynamic planet we inhabit.
Project Team
Ryan McGee (UCSB) - Earthquake Sound Engine Development
Andy Michael (USGS) - Scientific Director
Stock Plum (NL) - Seismic Data Processing Development
D.V. Rogers (NZ/AUS) - Producer
Mr Snow (NZ/AUS) - User Interface Development
Related Works
Seismic Auditory Display - Ryan McGee
Auditory Seismology - Florian Dombois
Earthquake Quartet #1 - Andy Michael
Earthquake Music - Zhigang Peng
Mori - Ken Goldberg
Tectonic - Micah Frank
Seismic Sounds - Soundcloud
Interpretations of Data from the Seismic Field - D.V. Rogers
© D.V. Rogers 02009-02013