“Acoustic Injection” is a shorthand phrase that refers to a digital media transformation process that I developed in 2003. The idea was to change selected attributes of a video stream based on the harmonic profile of an audio input. Every frame of video is modified by filtering live sound into thousands of discrete musical tones and associating those tones with particular hues, saturation levels, and luminance factors for all pixels in that frame. This effect can produce strikingly psychedelic imagery when the camera is directed at a subject that is creating the sound, such as a person talking: when certain pitches are heard in the voice, a ruddy complexion might suddenly turn green. When subjected to video feedback, there seems to be no limit to the variety of images that can be created.
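The mapping described above could be sketched in many ways; the original software is not published, so the following is only a minimal illustration of the general idea, not the actual algorithm. It takes a short audio block, finds the dominant tone with an FFT, assigns that tone a hue by its position within an octave (a hypothetical mapping), and shifts every pixel of an HSV frame toward that hue in proportion to the tone's dominance.

```python
import numpy as np

def tone_to_hue(freq, f_min=55.0):
    """Map a frequency to a hue in [0, 1) by its position within an octave
    on a log scale, so each octave sweeps the color wheel once."""
    return np.log2(freq / f_min) % 1.0

def acoustic_inject(frame_hsv, audio_block, sample_rate=44100):
    """Shift each pixel's hue toward the hue of the dominant audio tone,
    scaled by that tone's relative strength. Purely illustrative."""
    windowed = audio_block * np.hanning(len(audio_block))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(audio_block), d=1.0 / sample_rate)
    valid = freqs > 40.0                      # ignore DC and rumble
    peak = np.argmax(spectrum * valid)
    strength = spectrum[peak] / (spectrum.sum() + 1e-12)  # 0..1 dominance
    target_hue = tone_to_hue(freqs[peak])
    out = frame_hsv.copy()
    # per-pixel blend toward the tone's hue; boost saturation with loudness
    out[..., 0] = (1 - strength) * frame_hsv[..., 0] + strength * target_hue
    out[..., 1] = np.clip(frame_hsv[..., 1] * (1 + strength), 0.0, 1.0)
    return out
```

Because every octave wraps around the color wheel, a voice sliding up in pitch would cycle the scene's tint continuously, which matches the kind of sudden complexion shifts described above.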
This set includes 173 images captured during a three-minute interval on March 8, 2008. In total, 2,823 images were captured during those three minutes, so each image took roughly 0.064 seconds to produce, an average of about 16 images per second. The images were generated through video feedback between a high-definition camera and a 46” LCD display monitor, both connected to a computer running software designed to transform the signal passing between them. The algorithm is pixel-based, meaning that no logic exists in the program to recognize or generate the abstract forms that are seen here.
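A feedback loop of this kind can be imitated in a few lines, purely to show how complex structure can emerge from a pixel-based rule with no form-level logic. In this sketch (an assumption, not the exhibited system), the "camera" re-captures the displayed frame each step with a small spatial offset, and a strictly per-pixel transform is applied before redisplay.

```python
import numpy as np

def pixel_transform(frame, gain=1.05, hue_shift=0.02):
    """Strictly per-pixel operation: no neighborhood or object recognition."""
    out = frame * gain
    out[..., 0] = (out[..., 0] + hue_shift) % 1.0   # rotate hue channel
    return np.clip(out, 0.0, 1.0)

def feedback_loop(seed, steps=50):
    """Each step the camera re-captures the displayed output; np.roll
    stands in for imperfect camera/monitor alignment, which is what
    lets patterns drift and evolve across iterations."""
    frame = seed
    for _ in range(steps):
        captured = np.roll(frame, shift=(1, 1), axis=(0, 1))
        frame = pixel_transform(captured)
    return frame
```

The point of the sketch is the absence of any drawing logic: the forms come entirely from iterating a local rule through the camera-to-display loop.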