UC San Diego
Continuous Human Pose Estimation Using Long Short-Term Memory and Particle Filter
- Author(s): Gong, Chenghao
- Advisor(s): Gilja, Vikash; et al.
Estimating human pose in a continuous time series has many practical applications. For example, imagine that at some point in the future a robot would like to interact with human beings; for that robot to interact meaningfully with a human, it needs to interpret and anticipate human movements and gestures. Continuous human pose estimates can also inform specific applications such as brain-machine interfaces; in particular, we can use records of human pose across time to study the relationship between neural signals and human pose. In this thesis, we focus on continuous human pose estimation in a clinical environment.
There are many existing methods for estimating human pose from camera images, and many of them employ deep learning and convolutional neural network (CNN) architectures, which are widely used in computer vision. However, after estimating possible human poses from a single image frame, might we be able to use the statistical regularity of human movement to improve pose estimation? In this work we demonstrate that modeling this regularity across time improves pose estimation. We do so by post-processing the pose confidence maps generated by existing computer vision methods applied to each frame. Our post-processing method models movement using a long short-term memory (LSTM) network and a particle-filter-based framework for estimation.
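The approach described above can be sketched as a particle filter over candidate poses: a motion model propagates pose hypotheses from frame to frame, and the per-frame confidence maps weight them. This is a minimal illustrative sketch, not the thesis implementation: the thesis uses a learned LSTM motion model, for which a random-walk placeholder stands in here, and all function names, array shapes, and parameter values are assumptions.

```python
import numpy as np

def propagate(particles, rng, noise_std=0.05):
    # Motion model: the thesis uses an LSTM to predict the next pose;
    # a Gaussian random walk stands in for that learned model here.
    return particles + rng.normal(0.0, noise_std, particles.shape)

def weight(particles, confidence_map):
    # Score each candidate pose by looking up per-joint confidence in the
    # (assumed) H x W x J confidence maps from the per-frame detector.
    H, W, J = confidence_map.shape
    xs = np.clip((particles[:, :, 0] * W).astype(int), 0, W - 1)
    ys = np.clip((particles[:, :, 1] * H).astype(int), 0, H - 1)
    w = confidence_map[ys, xs, np.arange(J)].prod(axis=1)
    return w / (w.sum() + 1e-12)

def resample(particles, weights, rng):
    # Multinomial resampling keeps high-confidence pose hypotheses.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def track(confidence_maps, n_particles=500, n_joints=14, seed=0):
    rng = np.random.default_rng(seed)
    # Particles: candidate poses of shape (N, J, 2) in normalized image
    # coordinates, initialized uniformly over the frame.
    particles = rng.uniform(0.0, 1.0, (n_particles, n_joints, 2))
    estimates = []
    for cmap in confidence_maps:       # one confidence map per video frame
        particles = np.clip(propagate(particles, rng), 0.0, 1.0)
        w = weight(particles, cmap)
        # Pose estimate for this frame: weighted mean over particles.
        estimates.append((particles * w[:, None, None]).sum(axis=0))
        particles = resample(particles, w, rng)
    return np.stack(estimates)
```

In a full implementation, `propagate` would be replaced by a trained LSTM conditioned on the recent pose history, so that the temporal regularity of human movement, rather than isotropic noise, shapes the predicted pose distribution.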