Data is awesome, too. It represents the way we store our observations of the world we live in. Wavetable synthesis is a sound synthesis technique that creates periodic waveforms from stored data.
Today, our portable machines are far more powerful than the first machines used for wavetable synthesis, and they allow us to change sound synthesis programs as they run. This practice is known as live coding and has been around for 20 years.
The workshop explores data-driven wavetable synthesis in a live coding context and is a collaboration between Iván Paz and Julia Múgica, members of Barcelona’s lively live coding community.
Join Iván and Julia at their sonification workshop where the data collected from natural processes will be translated into wavetables to make sound. The results will be used within a live coding context, so whether you’re interested in sound synthesis or live coding, this workshop is right up your alley!
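As a minimal sketch of the idea behind the workshop (assuming Python with NumPy; function names are hypothetical), any series of measurements can be resampled into one period of a waveform and looped at an audible rate:

```python
import numpy as np

def make_wavetable(data, size=1024):
    """Resample an arbitrary data series into one normalized waveform period."""
    data = np.asarray(data, dtype=float)
    x_old = np.linspace(0.0, 1.0, len(data))
    x_new = np.linspace(0.0, 1.0, size)
    table = np.interp(x_new, x_old, data)   # stretch the data to table size
    table -= table.mean()                   # remove DC offset
    return table / np.max(np.abs(table))    # normalize to [-1, 1]

def play_table(table, freq=220.0, dur=1.0, sr=44100):
    """Loop the wavetable at `freq` Hz by stepping a phase accumulator."""
    n = int(dur * sr)
    phase = (np.arange(n) * freq * len(table) / sr) % len(table)
    return table[phase.astype(int)]         # nearest-neighbour table lookup

# e.g. turn a short measurement series into one second of a 220 Hz tone
samples = play_table(make_wavetable([0.1, 0.9, 0.3, -0.5, 0.2, -0.8]), 220.0)
```

The shape of the data becomes the timbre: smooth data yields mellow tones, jagged data yields bright, buzzy ones.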
About the mentors
Julia Múgica is a Mexican scientist currently venturing into the artistic exploration of nature’s complex processes. With an interdisciplinary background spanning biology and computational physics, she is deeply interested in understanding how collectives make decisions that result in behavioral synchrony. Recently, her curiosity extended to the artistic sphere, where the process of creation magnifies and prioritizes different aspects of the same phenomena. Her work includes animated particle design in the Processing language, noise design from random-walk algorithms for modular synthesizers, and collaborations with the artist Lina Bautista on rhythm and collective patterns with interactive robots.
Iván Paz has a background in physics, music and computer science. His work is framed in critical approaches to technology centered on from-scratch construction as an exploratory technique. Since 2010, he has been part of the live coding community and has presented workshops, talks and concerts across the Americas and Europe. He is currently working with machine learning techniques in live coding performance.
Alicia Champlin is working on a hybrid digital-acoustic instrument using a handmade OpenBCI EEG headset along with her own MaxMSP live/realtime data sonification application for EEG data, as a partial input to a modified bow chime (somewhat styled after the Robert Rutman projects). The outcome is drone music from mechanically amplified bowed cymbals in a live feedback loop with brainwaves and the player. The bow chime amplifies both the brain synth and the player’s physical interactions, sounding out the intersection between the resonant frequencies of the brain and those of the instrument itself.
You can listen to a performance with the prototype here:
Starting from a host of existing MaxMSP patches she built for a previous sound project, Alicia will revive and rework the synth components to tune the output for best effect with the bow chime, while also exploring whether she can replicate these synths in Pure Data in order to free the result from MaxMSP and create a truly open-source version.
Anyone interested in how the brain can be expressed in sound, those who know and use PD, and other brain synth makers are more than welcome to join Alicia on her quest and on the stage!
Here are some of Niklas Reppel’s thoughts on the practice: “In the end, the computer is an extension of ourselves, so bringing it to natural environments isn’t an attempt to ‘technologize’ nature, but just bringing our extended eyes, ears, and mind with us, even if it can sometimes present a logistical challenge. So in the end it’s not an attempt to bring technology to nature, but to bring ourselves, we who are cyborgs (as Andy Clark put it). In that sense, it’s not even an attempt at ‘reconciliation’ of nature and technology, if we don’t accept the split between us, nature, and technology. Technology is (or rather, can or should be) an extension of ourselves, and we are part of nature, anyway.”
During PIFcamp, Niklas will initially explore the soundscape in and around the camp by walking and listening, and select acoustically interesting spots. He’ll then apply a variety of recording techniques to create different samples of the same spots, and improvise upon the found soundscape with live coded live-sampling to bring out interesting nuances and different aspects of the sound.
His goal is to make his live coded performances more dynamic and include the physical aspects of the sound in his improvisations. The sound processing will be done in his own open-source software, Mégra, and he’ll be happy to share his knowledge and the stage with anyone who’d like to join.
Živa is a toolset for easy live coding in SuperCollider. During Roger Pibernat’s workshop we’ll cover everything needed to set up a live coding environment and start playing cool music – in minutes! We’ll start with the installation process, then set up the environment and go through the syntax. We will also learn some tips and tricks for fluent live coding during performances, and wrap it up with a final participants’ jam session.
No prior knowledge of coding or music theory is required. Just bring your laptop and headphones. If you can write, you can live code!
Roger drew on his experiences with SuperCollider and the issues he’s stumbled upon in his (and his colleagues’) performances. Živa could be considered a guide for live coders who wish to deepen their knowledge of the instrument, but it is also suitable for complete beginners.
»If the word cyborg – cybernetic organism – describes a fusion between a living organism and a piece of technology, then we, like all other life-forms, are symborgs, or symbiotic organisms.« Enhancing the sym-cy-orgian aspects of existence, Efe Di will be developing a wearable »sensory organ« hosting an intimate co-habitation between mycelium, electronics and human.
Mycelial growth on a petri dish will act as an external visualizer of internal psychophysiological processes of the anxiety-and-stress age. Electrocardiogram (ECG) measurements will be transformed into sound played to the mycelial body via a microcontroller. Fungi are capable of sound perception and respond to different frequencies with changes in growth and metabolism. Stress-related features of the ECG signal can therefore be translated into favorable or unfavorable sound frequencies in real time. In short: when you are in a good mood, the mycelium is also in a good mood. The result is a mycelial map of mental states.
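One way such a mapping could be sketched (in Python; the concrete measure, thresholds and names here are assumptions, not the project’s actual method) is to derive a heart-rate-variability figure from the intervals between heartbeats and map low variability, a common stress proxy, to a different frequency band than high variability:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences - a common HRV stress proxy."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def stress_to_freq(rr_intervals_ms, lo_hz=200.0, hi_hz=2000.0):
    """Map low HRV (more stress) to a high tone, high HRV to a low, calmer one."""
    hrv = rmssd(rr_intervals_ms)
    relaxed = min(hrv / 100.0, 1.0)   # crude normalization: ~100 ms RMSSD = fully relaxed
    return hi_hz - relaxed * (hi_hz - lo_hz)

# a calm heart shows variable beat-to-beat intervals; a stressed one is rigid
calm_hz = stress_to_freq([800, 860, 790, 870, 810])
stressed_hz = stress_to_freq([800, 802, 801, 799, 800])
```

The resulting frequency would then drive a tone generator on the microcontroller facing the petri dish.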
Modern standard healthcare diagnosis works in a strictly rational realm, collecting and analyzing cold, objective data. Findings are rarely interpreted in a way accessible to a layperson, so patients are excluded from the discussion about their health and often have no idea what is going on with and within their own bodies. Efe Di asks: is there another way to provide insight into physically hidden body processes? We know that our brain has evolved to recognize patterns, but it is weak at processing logic and making calculations. What happens if we visualize these states in the form of a living being that draws the shapes of emotions? Can these visualizations change the way we perceive harmful behavior towards ourselves? Can we feel it more deeply, emotionally, and mythologically?
Attending as a family, Laura and August intend to develop a project in collaboration with their children. The focus will be on relating sound and image/craft in fun and playful ways accessible to young people while remaining conceptually interesting for adults. They will be making a portable system for making music in nature.
The first step is to search for and collect natural objects around PIFcamp and capture their sounds. The next step is arranging the objects (rocks, sticks, leaves) on top of a long handwoven blanket whose patterning supports object placement, and then running a camera over the blanket from start to end; this will control playback and recording. The system will draw on their experience developing interactive camera-based web applications, August’s expertise in real-time audio synthesis, and Laura’s experience weaving. The plan is to create a system that is highly portable, playful, and keeps most of the interaction in and with nature.
What to do when a friend gifts you a bunch of outdated vacuum tubes? Apparently, it is possible to build a very nice low voltage tube preamp.
Car Valves is a project based on old, outdated vacuum tubes. It is built around the ECH 83, which was originally designed for use in car radios. It has one amplifying stage [triode] and a stage that seems to have been used in the radio’s receiving circuit [heptode]. In its original function, it was powered by a 12 V car battery. Compared to conventional high-voltage tubes, this property makes it an ideal and harmless object for experimentation. Join Ludwig Klöckner and build yours.
(Monster = imagination; Code = encoding into the environment)
Before we had books, knowledge was passed down orally; sometimes it was ‘stored’ directly in the landscape. Using small data sets, we will overlay physical space with a mental landscape during PIFcamp and experience how such landscapes can be built up to hold vast amounts of knowledge in a single space you can walk through. Theun will also bring a first prototype of a device that tries to give a physical experience of animal vocalizations.
We feel our own voices when we speak, but can we physically experience the voices of animals? This prototype is a first step and is ready to be tested for feedback to develop it further.
Scott Kildall will be developing a new installation called “Machine Alps”, which depicts what trees, plants and fungi might sense from human interference. Using sensors connected to the leaves of plants, the bark of trees and the surfaces of mycelium, several sculptural nodes will drive low-level synthesizers based on live data, using recorded samples of machine noise. This will be a performance where bleeps, churns, grinds and other disruptions get orchestrated into a coherent soundscape.
The electromagnetic field is a widely used medium for the transportation of information. Yet its use through free-space propagation (meaning without fiber-optic or electric cables) is mostly limited to the invisible radio spectrum. Those radio waves at comparably low frequencies can travel through walls, beyond mountains and through interplanetary dust clouds, and usually don’t keep us from sleeping. Visible light, on the other hand, can be blocked by a single fly and might also annoy us if it flickers through the night.
This week we do not want to be bothered by flashing lights and insects interrupting our communication channels. We want to explore what sound-carrying light looks like, how living nature alters the light, and how this affects the tonal properties of the sound. We will look at light as a specific form of electromagnetic radiation and use software-defined radio technology to modulate and demodulate streams of photons. We will experiment with loading information onto a visible carrier wave through amplitude, frequency, and phase modulation. Not only sound can be sent via light, but also poems and pictures. Sound to light to music. Words to light to poetry. Light to light to light.
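Amplitude modulation, the simplest of the three, can be illustrated numerically (a sketch in Python with NumPy; the light beam is idealized as a sampled, non-negative intensity signal, and all parameters here are illustrative assumptions): the audio rides on a DC offset so the brightness never goes negative, and a moving average spanning whole carrier periods recovers the envelope.

```python
import numpy as np

sr = 48000                                    # sample rate, Hz
t = np.arange(sr) / sr                        # one second of time stamps
message = 0.5 * np.sin(2 * np.pi * 440 * t)   # the "sound": a 440 Hz tone

# Modulation: light intensity cannot go negative, so the message rides
# on a DC offset and scales a carrier that ranges 0..1, like brightness.
fc = 8000                                     # carrier frequency, above the audio band
carrier = 0.5 + 0.5 * np.sin(2 * np.pi * fc * t)
intensity = (1.0 + message) * carrier         # what a photodiode would see

# Demodulation: a moving average exactly four carrier periods long
# cancels the carrier and leaves the slowly varying envelope.
window = 4 * (sr // fc)                       # 24 samples at these rates
envelope = np.convolve(intensity, np.ones(window) / window, mode="same")
recovered = 2.0 * envelope - 1.0              # undo the 0.5 gain and DC offset
```

Frequency and phase modulation follow the same pattern with the message driving the carrier’s frequency or phase instead of its amplitude; in practice the SDR toolchain handles the demodulation.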
The vast landscape of Triglavski narodni park offers the perfect dimensions for long-range light transmissions. A hike will take us to an elevated place so we can deploy our (hand-crank operated?) laser walkie-talkies. We will get within line of sight and then we will speak through the light.