Identifying Dancers and Style from Motion Capture Data Using ResNet
UC Davis Electronic Theses and Dissertations

Abstract

This work applies advances in deep learning for image classification to the recognition of movement style in motion capture data. A ResNet architecture is used to classify individual dancers from clips of their movement and to predict style from clips of various motions spanning seven different style categories (angry, childlike, depressed, proud, etc.). For the dancer identification task, motion capture clips of trained dancers at George Mason University performing the same choreographic sequence several different times were used. For the style identification task, a data set created by [39] of actions performed in different labeled styles, such as proud, depressed, angry, old, and childlike, was used. Results were compared across quaternion, scaled positional coordinate, and Euler angle representations of the motion capture clips supplied to the network for learning.
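As a sketch of the general idea (not the thesis's exact pipeline; the array shapes and the toy residual block below are illustrative assumptions), a motion capture clip can be laid out as a 2D "image" with joints along one axis and frames along the other, and the chosen rotation representation (here, 4 quaternion components) as the channel dimension, then passed through residual blocks of the kind that define a ResNet:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clip: 60 frames of 24 joints, each joint a unit quaternion (4 channels).
# These dimensions are illustrative; the thesis's preprocessing may differ.
channels, joints, frames = 4, 24, 60
clip = rng.standard_normal((channels, joints, frames))
clip /= np.linalg.norm(clip, axis=0, keepdims=True)  # normalize to unit quaternions

def conv2d(x, w):
    """Naive 'same'-padded 2D convolution over a (C, H, W) input."""
    c_out, c_in, kh, kw = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i + kh, j:j + kw] * w[o])
    return out

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)): the identity shortcut that defines a ResNet block."""
    y = np.maximum(conv2d(x, w1), 0.0)  # conv + ReLU
    y = conv2d(y, w2)
    return np.maximum(x + y, 0.0)       # add the skip connection, then ReLU

w1 = 0.01 * rng.standard_normal((channels, channels, 3, 3))
w2 = 0.01 * rng.standard_normal((channels, channels, 3, 3))
features = residual_block(clip, w1, w2)
print(features.shape)  # same (C, joints, frames) shape, so blocks can be stacked
```

Because each block preserves the input shape, many such blocks can be stacked before a final pooling and classification layer, and the same network structure accepts quaternion, positional, or Euler angle channels by changing only the channel count.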
