Concept:
Learn To Burn is a touch-triggered sound and generative video installation that invites the user to meditate on the nature of bad decisions on both personal and global scales. The output of the system subtly encourages the user to interact with it in a calm and meditative manner: touching it with multiple fingers or moving across the touchpad very quickly results in more chaotic sound and increasingly overexposed imagery, while a calm, meditative, repetitive touching motion maximizes the richness of the audio-visual output.
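As a rough sketch of this calm-versus-chaotic behavior (not the installation's actual code; the function name, thresholds, and scaling are assumptions), the interaction could be reduced to a single 0-to-1 "chaos" value derived from the number of fingers and the speed of motion:

    def chaos_level(num_touches, touch_speed, max_speed=1.0):
        # More fingers and faster motion both push the system toward chaos;
        # a single, slow, repetitive touch keeps this value near zero.
        finger_term = min(max(num_touches - 1, 0), 4) / 4.0
        speed_term = min(touch_speed / max_speed, 1.0)
        return max(finger_term, speed_term)

    # The resulting 0..1 value could scale both the audio modulation depth
    # and the exposure of the overlaid video frames.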
Methodology/Materiality:
A touchpad is logically split into a 3x3 grid of cells, where a single touch in any cell triggers a particular sound sample and sends a message to start iterating a particular generative video model. Multiple touches modulate the sound using the x/y coordinates reported by the touchpad, and the image outputs from the generative video models are overlaid so that they create a burn effect. Every interaction with the piece therefore creates a unique audio-visual experience based on the viewer's input, and the generative video models themselves produce output that is qualitatively similar but never exactly the same.
The video models were created with Pix2Pix, an image-to-image translation framework that uses a conditional generative adversarial network to learn the relationship between a pair of images, A and B. Pix2Pix has generally been used to colorize photos by training a model on black-and-white and color images of the same scene, or to map contour line drawings to color images. In each case, once a model has been trained, an input image 'A' that the model has never seen can be fed to it to infer what the corresponding 'B' image should look like. I trained my models on pairs of consecutive frames from short video sequences. Once a model has been trained, it is given a seed image and predicts the next frame, which is fed back into the model in a feedback loop to create a generative video. These models have different properties: in some cases a model converges to a sequence of images that changes very little; in other cases it is divergent, never settling into any fixed pattern.
Sound Sources:
The sound samples focus on the act of breathing. They include a recording of me throat singing in the Ando Gallery at the Art Institute of Chicago, sine waves derived from the 3D distance between the hands of a Tai Chi master performing the 24-step form, and field recordings of birds near Lake Michigan.
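As a concrete illustration of the grid logic described above, the following minimal sketch maps a normalized touch position to one of the nine cells; the function name and the assumption of 0-to-1 coordinates are mine, not part of the installation's actual code:

    def cell_index(x, y, grid=3):
        # Map a normalized touch position (x, y each in 0..1) to a cell
        # index 0..8 on the 3x3 grid; each cell is bound to one sound
        # sample and one trained video model.
        col = min(int(x * grid), grid - 1)
        row = min(int(y * grid), grid - 1)
        return row * grid + col

    # A touch in the centre of the pad lands in the middle cell (index 4).
    assert cell_index(0.5, 0.5) == 4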
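The feedback loop that turns a trained Pix2Pix model into a generative video can be sketched in a similar spirit; here next_frame stands in for one forward pass through a trained generator and is purely an assumption for illustration:

    def generate_video(next_frame, seed_frame, num_frames=300):
        # Each predicted frame is fed back in as the next 'A' image, so the
        # sequence may converge toward a nearly static image or keep diverging.
        frames = [seed_frame]
        for _ in range(num_frames):
            frames.append(next_frame(frames[-1]))
        return frames

    # Example with a trivial stand-in for a trained generator:
    frames = generate_video(lambda frame: frame, seed_frame=0, num_frames=10)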
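One plausible reading of the Tai Chi-derived sine waves (the exact distance-to-frequency mapping is not specified above, so the ranges below are illustrative assumptions) is to compute the 3D distance between the hands for each frame of motion data and map it onto an oscillator frequency:

    import math

    def hand_distance(left, right):
        # 3D Euclidean distance between the two hands, each given as (x, y, z).
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(left, right)))

    def distance_to_frequency(d, d_min=0.1, d_max=1.2, f_min=110.0, f_max=880.0):
        # Linearly map a hand distance (assumed range in metres) onto a
        # sine-wave frequency in Hz; these ranges are illustrative only.
        t = min(max((d - d_min) / (d_max - d_min), 0.0), 1.0)
        return f_min + t * (f_max - f_min)

    # e.g. a hand distance of about 0.64 m maps to roughly 488 Hz with these defaults.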
Video Sources:
The video sources focus on burning, deflagration, detonation, fluid dynamics, and mixing. Two of the models are trained on recently released footage of two atomic weapons tests from Operation Hardtack I, Nutmeg and Juniper. Three are trained on animations I made from data produced by an astrophysics simulation code. One was trained on a video of a candle being lit and then blown out in the dark, and another on a video of me smoking. The remaining two are trained on time-lapses, shot from the same vantage point, of sunrise and moonrise over Lake Michigan. Check out other videos related to this project here: https://vimeo.com/album/5626674