
DH Fair Discussion Panel

Wed Apr 21, 2021 3:00 PM - 4:00 PM
Online
A panel discussion with Timothy R. Tangherlini and Lisa Wymore, moderated by Claudia von Vacano.

Dancing in the Fire: Toward a Choreographic Search Engine
Timothy R. Tangherlini, Professor, Department of Scandinavian, University of California, Berkeley
Collaborative work with Peter M. Broadwell (Stanford Library)

Critics have long noted the strong visual aspects of K-pop, with the videos for newly released songs garnering millions of views within a very short time span. A key feature of many K-pop videos is the dancing. Although many of the official videos are not solely dance focused, incorporating aspects of visual storytelling, nearly all K-pop videos include some form of dance. In addition to the "main" video for a K-pop release, it has become common practice to release a dance video, or a dance rehearsal video, that focuses exclusively on the dances. These videos allow fans to learn and practice the dance, thereby increasing the kinesthetic connection between fans and their idols. At the same time, they afford an opportunity to explore the "dance vocabulary" of K-pop. While there are well-known K-pop choreographers who work with the idols to create their dances, there is little documentation of these dances beyond the dance videos themselves. In our work, we develop a series of methods for (a) identifying dance sequences in K-pop videos, irrespective of whether they are dance videos; (b) building classifiers for navigating a large-scale K-pop video corpus; and (c) applying deep learning methods to identify dancers and their body positions. Taken together, these approaches pave the way for the development of a macroscope for the study of K-pop videos, allowing researchers to identify patterns in the K-pop space, explore dynamic change in features such as color space, or interrogate differences in the visual representation of male and female performers at an aggregate scale.
Importantly, as pose estimation has become more accurate, these methods allow us to begin inferring the dance vocabulary of K-pop and tracing transcultural choreographic flows.

What Do Computers Know about Making Dances?
Lisa Wymore, Professor, Department of Theater, Dance, and Performance Studies, University of California, Berkeley

Dance makers can choose to imbue embodied knowledge into our machines through a variety of methods, from motion capture to voice detection, image recognition, and motion tracking. What happens when we ask our computers to co-create a piece of choreography with this embodied information? Can we find innovative and unexpected modes of expression that would not otherwise have occurred if the computer or the choreographer had worked alone? In this presentation I will show examples from my work Endless Gestures of Goodwill (March 2015), a dance film derived from a cache of over 250 video files of dance movements and gestures. The gestures were created specifically with a variety of compatible input and output poses. The video files were then coded and run through a randomizing algorithm to create an endless dance series that appears seamless, without any sudden or jerky transitions. Ideally, the piece can run indefinitely, as if the computer were creating an endless dance. The piece was designed to be viewed in a museum setting rather than in a theater. To add to the feeling of collaborating with the computer on the dance, a camera hanging from the ceiling of the museum captures the audience members' proximity to the screens. From this data, the film slows down or speeds up depending on the spatial position of the viewers, creating a lively interplay between the gestures projected in the film and the real-time movement of the audience members within the museum space.
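The random chaining described above (gesture clips tagged with compatible input and output poses, with the next clip chosen at random so that transitions stay seamless) can be sketched in a few lines of Python. The clip names and pose labels below are invented for illustration; the actual piece draws on more than 250 video files.

```python
import random

# Hypothetical catalog: each gesture clip is tagged with the pose it starts
# from and the pose it ends in. A chain looks seamless when each clip's
# start pose matches the previous clip's end pose.
CLIPS = {
    "reach_left": {"start": "neutral", "end": "arm_left"},
    "sweep_down": {"start": "arm_left", "end": "crouch"},
    "rise":       {"start": "crouch", "end": "neutral"},
    "turn":       {"start": "neutral", "end": "neutral"},
}

def endless_dance(first_clip, length, rng=random):
    """Randomly chain gesture clips so consecutive poses always match."""
    sequence = [first_clip]
    for _ in range(length - 1):
        current_end = CLIPS[sequence[-1]]["end"]
        # Candidates: every clip whose start pose matches the current end pose.
        candidates = [name for name, clip in CLIPS.items()
                      if clip["start"] == current_end]
        sequence.append(rng.choice(candidates))
    return sequence
```

Because the selection is random but constrained to pose-compatible clips, the sequence can in principle run forever without a jerky cut, which matches the "endless dance" the installation aims for.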
In thinking about this piece again, I wonder about the possibility of creating larger caches of recorded gestures using cloud-based technology, and of using deep learning to speed up the detection of compatible dance gestures within very large data sets. When would this data become a dance, and would the computer know if it had created one?
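Detecting compatible gestures in a very large cache, as imagined above, reduces to measuring how close one clip's ending pose is to another clip's starting pose. A minimal sketch, assuming each pose arrives as a list of (x, y) keypoints from some off-the-shelf pose estimator; the keypoint layout and the threshold value are illustrative assumptions, not part of the original work.

```python
import math

def normalize(pose):
    """Translate a pose so its centroid sits at the origin (ignores scale)."""
    cx = sum(x for x, _ in pose) / len(pose)
    cy = sum(y for _, y in pose) / len(pose)
    return [(x - cx, y - cy) for x, y in pose]

def pose_distance(a, b):
    """Mean Euclidean distance between corresponding keypoints."""
    a, b = normalize(a), normalize(b)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def compatible(end_pose, start_pose, threshold=0.1):
    """Two gestures chain smoothly if the end pose of one is close to
    the start pose of the other."""
    return pose_distance(end_pose, start_pose) < threshold
```

Normalizing by the centroid makes the comparison invariant to where the dancer stands in frame, so clips recorded at different positions can still be judged compatible.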