Sonifying Animation: The Visual-Audioizer
A visual-audioizer is a tool that reads visual input and translates it into audio. Using non-objective techniques and animation principles, the animator/computer-musician can treat drawing as both technique and practice, much like learning a musical instrument. Pre-recorded video provides an alternative input, allowing animated pieces, coupled with audible output, to be experienced from a new perspective.
Graduate Researcher – MFA Thesis Project
Kyoung Swearingen, Assistant Professor of Design
Marc Ainger, Associate Professor of Music
Matt Lewis, Assistant Professor, Design
The visual-audioizer is a patch created in Max in which traditional animation techniques, in tandem with basic computer vision tracking methods, serve as a tool that lets the visual time-based media artist produce audio and, eventually, music. Used with the animated form, the tool provides real-time feedback within the software, allowing the user to hear their visual creations in a sonic setting.
For the user unfamiliar with animation techniques and computer music, the visual-audioizer is an experiential media system in which animation, audiovisuals, and exploration are treated as one. To the animator, it offers an alternative approach: applying an implicit temporal knowledge of motion as a means of creating audio. For the computer musician, it is a new way to learn about animation and about animation techniques in the role of a musical instrument. Using the visual-audioizer, animators and computer musicians can find new ways to experience and create audiovisual media by drawing and interacting with the tool itself.
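The project itself is a Max patch, but the core idea, tracking a visual feature and mapping its position to sound parameters, can be sketched outside Max. The snippet below is a minimal illustration in Python with NumPy, not the patch's actual signal routing: the mapping choices (centroid height to pitch, horizontal position to amplitude) and all function names are illustrative assumptions.

```python
import numpy as np

def track_centroid(frame, threshold=0.5):
    """Stand-in for basic CV tracking: return the (x, y) centroid of
    bright pixels in a grayscale frame with values in 0..1, or None
    if nothing exceeds the threshold."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def centroid_to_audio(frame, sr=44100, dur=0.05, f_lo=110.0, f_hi=880.0):
    """Map the tracked centroid to a short sine burst:
    higher on screen -> higher pitch, further right -> louder.
    (An assumed mapping for illustration only.)"""
    n = int(sr * dur)
    c = track_centroid(frame)
    if c is None:                      # empty frame -> silence
        return np.zeros(n)
    h, w = frame.shape
    x, y = c
    freq = f_lo + (1.0 - y / h) * (f_hi - f_lo)
    amp = x / w
    t = np.arange(n) / sr
    return amp * np.sin(2 * np.pi * freq * t)

# Example: a single bright dot near the top-left of a 100x100 frame
frame = np.zeros((100, 100))
frame[10, 20] = 1.0
buf = centroid_to_audio(frame)         # quiet, high-pitched burst
```

Run per animation frame, a mapping like this gives the real-time feedback loop described above: as the drawing moves, the sound follows.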
Project Presented At:
ACCAD 2020 Playtesting Day
The International Conference on New Interfaces for Musical Expression (NIME) 2020