Learning Interaction Grammar for Action Recognition
- Liu, Tengyu
- Advisor(s): Zhu, Song-Chun
Abstract
Video action recognition has been at center stage since its introduction in 2004 [SLC04]. Over the past 15 years, countless methods have been proposed to understand what humans are doing in a video clip. While some works infer action labels directly from pixel information, others propose learning a multi-level hierarchical structure that composes actions. In this work, we propose learning a two-layer grammar model for action recognition based on human-object interaction. To evaluate the idea, we propose an interaction grammar for action recognition in video. Inference over the model is performed with simulated annealing using MCMC and, alternatively, with deep learning.
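To make the inference procedure concrete, below is a minimal sketch of simulated annealing with MCMC-style Metropolis proposals. The label set and energy function are hypothetical placeholders for illustration only, not the dissertation's actual interaction-grammar model.

```python
# Minimal simulated-annealing sketch with Metropolis proposals.
# LABELS and energy() are hypothetical stand-ins, not the thesis model.
import math
import random

LABELS = ["reach", "grasp", "drink", "put_down"]  # hypothetical action labels


def energy(assignment):
    """Hypothetical energy: penalize label changes between consecutive frames."""
    return sum(a != b for a, b in zip(assignment, assignment[1:]))


def anneal(num_frames=20, steps=5000, t_start=2.0, t_end=0.01):
    """Search for a low-energy per-frame labeling via Metropolis moves with cooling."""
    state = [random.choice(LABELS) for _ in range(num_frames)]
    e = energy(state)
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        # Propose relabeling one random frame.
        i = random.randrange(num_frames)
        proposal = state[:]
        proposal[i] = random.choice(LABELS)
        e_new = energy(proposal)
        # Metropolis acceptance: always accept downhill, sometimes accept uphill.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            state, e = proposal, e_new
    return state, e


if __name__ == "__main__":
    labels, final_energy = anneal()
    print(final_energy, labels)
```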