
Deep GoogLeNet Features for Visual Object Tracking

Publication Type : Journal Article

Thematic Areas : Center for Computational Engineering and Networking (CEN)

Source : 2018 IEEE 13th International Conference on Industrial and Information Systems (ICIIS)

Url : https://ieeexplore.ieee.org/abstract/document/8721317

Campus : Coimbatore

School : School of Engineering

Verified : No

Year : 2018

Abstract : Convolutional Neural Networks (CNNs) have recently become very popular in visual object tracking due to their strong feature representation capabilities. Almost all current CNN-based trackers use features extracted from the shallow convolutional layers of the VGGNet architecture. This paper presents an investigation of the impact of deep convolutional layer features in an object tracking framework. In this study, we demonstrate for the first time the viability of features extracted from the deep layers of the GoogLeNet CNN architecture for the purpose of object tracking. We integrated GoogLeNet features into a discriminative correlation filter based tracking framework. Our experimental results show that GoogLeNet features provide significant computational advantages over the conventionally used VGGNet features, without much compromise in tracking performance. It was observed that features obtained from the inception modules of GoogLeNet have high depth. Principal Component Analysis (PCA) was therefore employed to reduce the dimensionality of the extracted features, which greatly reduces the computational cost and thus improves the speed of the tracking process. Extensive evaluations have been performed on three benchmark datasets, OTB, ALOV300++ and VOT2016, with performance measured in terms of metrics such as F-score, One Pass Evaluation, robustness and accuracy.
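As a rough illustration of the pipeline the abstract describes (deep GoogLeNet feature extraction followed by PCA compression of the high-depth inception features), the Python sketch below taps an inception module of torchvision's pretrained GoogLeNet and reduces its channel dimension with scikit-learn's PCA. The choice of the inception4e layer, the 224x224 input size and the 64 PCA components are illustrative assumptions; the paper's exact layers and settings are not given in this abstract.

    import torch
    from sklearn.decomposition import PCA
    from torchvision import models

    # Load a pretrained GoogLeNet (torchvision's Inception v1) in inference mode.
    net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    net.eval()

    # Capture the activation of a deep inception module via a forward hook.
    feats = {}
    net.inception4e.register_forward_hook(
        lambda module, inp, out: feats.update(deep=out.detach())
    )

    # A dummy search-region patch standing in for a cropped video frame.
    patch = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        net(patch)

    fmap = feats["deep"][0]                            # (C, h, w); C = 832 here
    C, h, w = fmap.shape
    X = fmap.permute(1, 2, 0).reshape(-1, C).numpy()   # one row per spatial cell

    # PCA compresses the channel ("depth") dimension before the correlation
    # filter stage, which is where the computational saving comes from.
    pca = PCA(n_components=64)
    reduced = pca.fit_transform(X).reshape(h, w, 64)
    print(reduced.shape)                               # (14, 14, 64) for a 224x224 input

In an actual tracker, the PCA basis would typically be fitted once on the first frame's features and reused for subsequent frames, rather than refitted per frame, before the compressed feature map is passed to the discriminative correlation filter.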

Cite this Research Publication : P. Aswathy, Siddhartha Deepak Mishra, "Deep GoogLeNet Features for Visual Object Tracking", 2018 IEEE 13th International Conference on Industrial and Information Systems (ICIIS).
