"cowbird" is a 3d motion capture-driven intermedia dance performance created during a Creative Artist's Residency at the B2 Center for Media, Arts, and Performance within the Atlas Institute of the University of Colorado, Boulder. The residency occurred from August 21st to September 4th, 2022, with live performances on September 3rd and 4th. These were the first live performances in the Black Box Experimental Studio since the onset of the COVID-19 pandemic.
cowbird story (text loosely fed into VQGAN+CLIP to generate the "cowbird" imagery):
Red Center: A pair of red-winged blackbirds find a tree and build a nest. They lay a clutch of eggs.
The “cowbird” is a genus of birds that practices brood parasitism: a mother cowbird sneaks her egg into the nest of a host species, which unwittingly raises the cowbird chick. This poses identity issues for adolescent cowbirds: how do they learn cowbird songs, foraging habits, migration patterns, and mating behaviors? How do they learn to become a cowbird?
Research by Mark Hauber has shown that fledgling cowbirds often sneak out of their host nests at night to seek out environments where they are likely to encounter adult cowbirds. His research also found that cowbird “chatter” acts as a kind of innate password that activates a special region of the auditory system in the cowbird brain. Once activated, the young bird’s exploratory behavior increases, and it is drawn into a flock of adults by its attraction to this “chatter.” Females often chatter to mark territory or to call to males they find attractive. For more information, please see the following:
“cowbird” examines this distributed bird-rearing complex through a dance-triggered multimedia performance. Using motion capture technology, 3D space is segmented into nine 3x3x3 grids of virtual “pillars,” providing up to 243 unique trigger points for a dancer to interact with. When a dancer intersects a trigger point, a unique sound and video sample are played. Each performance thus produces a subtly different audio/visual experience, saved in real time as a unique animation.
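The trigger layout described above can be sketched in a few lines of code. This is a hypothetical illustration, not the software used in the performance: it assumes a square stage divided on the floor into a 3x3 arrangement of nine grids, each grid itself a 3x3x3 lattice of pillar cells, for 9 × 27 = 243 unique trigger indices. All names and dimensions are assumptions.

```python
def trigger_index(x, y, z, stage=9.0, height=3.0):
    """Map a 3D position (in meters) to a trigger index in [0, 242].

    Hypothetical layout: the stage floor (stage x stage) holds a 3x3
    arrangement of nine grids; each grid is a 3x3x3 lattice of cells
    ("pillars"), so there are 9 * 27 = 243 distinct triggers.
    """
    grid_span = stage / 3           # footprint of one 3x3x3 grid
    cell_w = grid_span / 3          # horizontal size of one pillar cell
    cell_h = height / 3             # vertical size of one pillar cell

    # Which of the nine floor grids the dancer occupies (0..8).
    gx = min(int(x / grid_span), 2)
    gy = min(int(y / grid_span), 2)
    grid = gy * 3 + gx

    # Which cell within that grid's 3x3x3 lattice (0..26).
    cx = min(int((x % grid_span) / cell_w), 2)
    cy = min(int((y % grid_span) / cell_w), 2)
    cz = min(int(z / cell_h), 2)
    cell = cz * 9 + cy * 3 + cx

    return grid * 27 + cell         # 0..242
```

In a setup like this, each index would simply key into a bank of paired audio/video samples, so a mocap frame's head or hand position selects which sample fires.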
The imagery was created using VQGAN+CLIP, a set of neural network models that can create images from a text prompt. For more information on the software used to create this imagery, please see the following: https://pythonawesome.com/a-katherine-crowson-vqgan-clip-derived-google-colab-notebook/
I would like to thank my collaborators, elle hong and Katarina Lott. Without them, literally, nothing would happen.
Sasha de Koninick for her visionary costume design.
Anna Pillot and Caroline Butcher for their input and participation in previous versions of this project.
Meg Madorin for allowing me into her dance class.
Michelle Ellsworth for taking me on in an independent study from which this project emerged.
Ondine Geary and Steven Frost for giving me the opportunity to put on this performance.
Special thanks to Gary McCrumb for his help in navigating the Black Box and for his hospitality, without which this project would not have happened.
Special thanks to Jeff Merkle and Sean Winters for the amazing work and help to get me online with the ambisonic sound system.
I also want to thank all my IAWP cohorts, past and present, as well as Mark Amerika, Lori Emerson, Michael Theodore, and Joel Swanson for their support, conversation, and inspiration.