Social Action Recognition in Egocentric RGB-D Videos
Advised by: Prof. Chieh-Chih (Bob) Wang
Date: Mar. 2014 - June 2014
Wearable computing has gained significant attention in recent years, with first-person vision emerging as a critical machine perception technique for enabling advanced wearable applications. In this work, we address the challenge of egocentric social action recognition using a structured-light RGB-D camera.
Our proposed method achieved 94% recognition accuracy on a dataset of 65 RGB-D videos. The recognition process consists of two main steps. First, we extract 3D trajectories of moving parts corresponding to five types of social actions captured by the egocentric RGB-D camera. Second, we reduce the dimensionality of the extracted trajectories to facilitate training and testing, and classify them with a multi-class classification tree.
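The two-step pipeline above can be sketched as follows. This is a minimal illustration, not the original implementation: the feature layout (flattened per-frame 3D coordinates), the number of PCA components, and the use of scikit-learn's DecisionTreeClassifier as the multi-class tree are all assumptions, and the trajectory data is synthetic.

```python
# Sketch of the recognition pipeline: dimensionality reduction + multi-class tree.
# All parameters and data here are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

n_videos = 65     # dataset size mentioned above
n_actions = 5     # five social action classes
n_frames = 30     # assumed trajectory length per video

# Synthetic stand-in for extracted 3D trajectories: each video becomes a
# flattened vector of (x, y, z) positions over time, with a class-dependent
# offset so the classes are separable.
labels = rng.integers(0, n_actions, size=n_videos)
X = rng.normal(size=(n_videos, n_frames * 3)) + labels[:, None]

# Step 2: reduce dimensionality, then classify with a multi-class decision tree.
model = make_pipeline(
    PCA(n_components=10),             # assumed target dimensionality
    DecisionTreeClassifier(random_state=0),
)
model.fit(X, labels)
predictions = model.predict(X)
```

In practice the trajectory features would come from tracking moving parts in the RGB-D stream, and accuracy would be measured on held-out videos (e.g. with cross-validation) rather than on the training set.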
Action speaks louder than words but not nearly as often. - Mark Twain