Publication Type:

Conference Paper

Source:

2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA) (2014)

Keywords:

action classes, action recognition datasets, bag-of-words framework, computational modeling, Hidden Markov models, histograms, human action recognition, object recognition, support vector machines, temporal information, temporal occurrence, training, trajectory, unconstrained conditions, vectors, video input, video signal processing

Abstract:

Human action recognition from video input has seen much interest over the last decade. In recent years, the trend is clearly towards action recognition in real-world, unconstrained conditions (i.e. not acted) with an ever-growing number of action classes. Much of the work so far has used single frames or sequences of frames where each frame was treated individually. This paper investigates the contribution that temporal information can make to human action recognition in the context of a large number of action classes. The key contributions are: (i) We propose a complementary information channel to the Bag-of-Words framework that models the temporal occurrence of the local information in videos. (ii) We investigate the influence of salient local information whose temporal occurrence is more important than the local information itself. The experimental validation on action recognition datasets with the largest number of classes to date shows the effectiveness of the proposed approach.
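To make the idea of a complementary temporal channel concrete, the following is a minimal sketch (not the authors' implementation) of how a standard Bag-of-Words histogram over quantized local features could be augmented with a histogram of when each visual word occurs in the video; all function names, parameters, and the toy data are illustrative assumptions.

```python
import numpy as np

def bow_with_temporal_channel(word_ids, frame_ids, n_words, n_bins):
    """Build a Bag-of-Words histogram plus a complementary temporal
    channel: for each visual word, a histogram over the normalized
    temporal positions where that word occurs in the video.
    (Illustrative sketch only, not the paper's exact formulation.)"""
    word_ids = np.asarray(word_ids)
    frame_ids = np.asarray(frame_ids, dtype=float)

    # Standard BoW channel: counts of each visual word, L1-normalized.
    bow = np.bincount(word_ids, minlength=n_words).astype(float)
    bow /= max(bow.sum(), 1.0)

    # Temporal channel: map frame indices into [0, 1), then bin each
    # word's occurrences over time so that word order/timing survives.
    t = frame_ids / (frame_ids.max() + 1.0)
    temporal = np.zeros((n_words, n_bins))
    for w, ti in zip(word_ids, t):
        temporal[w, int(ti * n_bins)] += 1.0
    temporal /= max(temporal.sum(), 1.0)

    # Final descriptor: plain BoW concatenated with the temporal channel.
    return np.concatenate([bow, temporal.ravel()])

# Toy video: 6 local features quantized to 3 visual words,
# observed at frames 0..9.
desc = bow_with_temporal_channel([0, 1, 0, 2, 1, 0],
                                 [0, 2, 5, 7, 8, 9],
                                 n_words=3, n_bins=2)
```

Two videos containing the same local features in a different temporal order yield identical BoW channels but different temporal channels, which is what lets a classifier (e.g. an SVM, as in the paper) exploit timing information.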

Cite this Research Publication

O. V. R. Murthy and R. Goecke, “The Influence of Temporal Information on Human Action Recognition with Large Number of Classes”, in 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2014.
