Recently, a video representation based on dense trajectories has been shown to outperform other human action recognition methods on several benchmark datasets. In dense trajectories, points are sampled at uniform intervals in space and time and then tracked using a dense optical flow field. However, this uniform sampling does not discriminate objects of interest from the background or other objects, so a large amount of information is accumulated that may not be useful. This irrelevant information can bias the learning process when it dominates the information contributed by the principal object(s) of interest. The problem is exacerbated as more data accumulates, whether due to an increase in the number of action classes or to computing dense trajectories at multiple scales in space and time, as in the Spatio-Temporal Pyramidal approach. In contrast, we propose a technique that selects only a few dense trajectories and then generates a new set of trajectories termed 'ordered trajectories'. We evaluate our technique on the complex benchmark HMDB51, UCF50 and UCF101 datasets, each containing 50 or more action classes, and observe improved recognition rates and removal of background clutter at a lower computational cost.
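The dense-trajectory sampling and tracking described above can be illustrated with a minimal NumPy sketch. Note this is a toy illustration, not the paper's method: a synthetic constant flow field stands in for a real dense optical flow estimate, and the grid step and trajectory length are arbitrary choices.

```python
import numpy as np

def dense_sample(h, w, step):
    """Sample points on a uniform spatial grid, every `step` pixels."""
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    return np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

def track(points, flow_fn, length):
    """Track each sampled point through `length` frames by following
    the flow field, yielding one trajectory per point."""
    traj = [points.copy()]
    for t in range(length):
        traj.append(traj[-1] + flow_fn(traj[-1], t))
    # Shape: (num_points, length + 1, 2)
    return np.stack(traj, axis=1)

# Hypothetical flow field: every pixel moves 1 px right, 0.5 px down
# per frame (a real system would use dense optical flow here).
flow = lambda pts, t: np.tile([1.0, 0.5], (pts.shape[0], 1))

pts = dense_sample(48, 64, step=8)   # uniform grid of sampled points
trajs = track(pts, flow, length=15)  # 15-frame trajectories
```

Every grid point is tracked regardless of whether it lies on the actor or the background, which is exactly the indiscriminate accumulation the abstract argues against.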
Murthy, O. V. R. and Goecke, R., "Ordered Trajectories for Large Scale Human Action Recognition", in The IEEE International Conference on Computer Vision (ICCV) Workshops, 2013.