Learning of Generic Vision Features Using Deep CNN
Publication Type: Conference Paper
Source: 2015 Fifth International Conference on Advances in Computing and Communications (ICACC), IEEE, Kochi (2015)
Keywords: basic deep convolution neural nets, channel specific kernels, color channels, Computer vision, computer vision tasks, Convolution, Convolution Neural networks, convolving kernels, deep CNN, Deep learning, explanatory factors, Feature extraction, Feature learning, generic vision features, hand-crafting features, hand-engineering features, image classification, Kernel, learning (artificial intelligence), Learning algorithms, linearity, neural nets, Neural networks, Neurons, representation learning, Supervised learning, Training, visual recognition tasks
The performance of a learning algorithm applied to computer vision tasks depends on the features engineered from the image. The premise is that different representations can entangle and capture most of the explanatory factors responsible for variations in images, be they rigid, affine, or projective. Hence researchers devote great attention to hand-engineering features that capture these variations. The problem is that doing so requires subtle domain knowledge, which keeps researchers from reaching ideal representations, so learning algorithms never realize their full potential. In recent times there has been a shift from hand-crafting features to representation learning. The resulting features are not only optimal but also generic, in that they can be used as off-the-shelf features for visual recognition tasks. In this paper we design and experiment with a basic deep convolutional neural network for learning generic vision features using a variant of convolving kernels. These kernels give importance to the individual uncorrelated color channels of a color model by convolving each channel with a channel-specific kernel. We achieve a considerable improvement in performance even when using a smaller dataset.
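The channel-specific kernel idea described in the abstract can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: it assumes "valid"-mode cross-correlation (the usual CNN convention) and that each of the C input channels is filtered by its own 2-D kernel, producing one output map per channel; the function name `channelwise_conv` is hypothetical.

```python
import numpy as np

def channelwise_conv(image, kernels):
    """Filter each color channel with its own kernel.

    image:   array of shape (H, W, C), e.g. C = 3 uncorrelated color channels
    kernels: array of shape (C, kH, kW), one kernel per channel
    Returns an array of shape (H - kH + 1, W - kW + 1, C) -- valid mode.
    """
    H, W, C = image.shape
    kH, kW = kernels.shape[1], kernels.shape[2]
    out = np.zeros((H - kH + 1, W - kW + 1, C))
    for c in range(C):
        k = kernels[c]
        # Slide the channel-specific kernel over its channel only
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j, c] = np.sum(image[i:i + kH, j:j + kW, c] * k)
    return out
```

In modern deep-learning libraries this pattern corresponds to a depthwise (grouped) convolution with one group per input channel, rather than the standard convolution that sums kernel responses across all channels.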