The performance of a learning algorithm applied to computer vision tasks depends on the features engineered from the images. The premise is that different representations can entangle and capture most of the explanatory factors responsible for variations in images, whether rigid, affine, or projective. Researchers have therefore devoted great attention to hand-engineering features that capture these variations, but doing so requires substantial domain knowledge, and hand-crafted features rarely attain the best possible representations, so learning algorithms never reach their full potential. In recent times there has been a shift from hand-crafting features to representation learning. The resulting features are not only optimal for the task at hand but also generic, in that they can be used as off-the-shelf features for other visual recognition tasks. In this paper we design and experiment with a basic deep convolutional neural network for learning generic vision features, using a variant of the convolution kernel: it assigns importance to the individual uncorrelated color channels of a color model by convolving each channel with its own channel-specific kernel. We achieved a considerable improvement in performance even when using a smaller dataset.
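To make the channel-specific kernel idea concrete, the following is a minimal NumPy sketch, not the paper's implementation: each color channel is convolved with its own kernel and keeps its own filtered response, instead of summing the responses across channels as a standard convolution layer does. The function name `channelwise_conv` and its signature are illustrative assumptions.

```python
import numpy as np

def channelwise_conv(image, kernels):
    """Convolve each color channel with its own kernel ("valid" mode).

    image:   (H, W, C) array, e.g. an RGB image with C = 3
    kernels: (k, k, C) array, one k x k kernel per channel

    Unlike a standard convolution, which sums over all input channels,
    each channel here is filtered by its own channel-specific kernel.
    """
    H, W, C = image.shape
    k = kernels.shape[0]
    out = np.zeros((H - k + 1, W - k + 1, C))
    for c in range(C):                      # one kernel per channel
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[i, j, c] = np.sum(
                    image[i:i + k, j:j + k, c] * kernels[:, :, c]
                )
    return out

# Toy example: a random 3-channel "image" and per-channel 3x3 averaging kernels.
img = np.random.rand(8, 8, 3)
ker = np.ones((3, 3, 3)) / 9.0
feat = channelwise_conv(img, ker)
print(feat.shape)  # (6, 6, 3): one filtered map per input channel
```

In modern frameworks this per-channel filtering corresponds to a depthwise (grouped) convolution with the group count equal to the number of input channels.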
K. Nithin.D and Dr. Bhagavathi Sivakumar P., “Learning of Generic Vision Features Using Deep CNN”, in 2015 Fifth International Conference on Advances in Computing and Communications (ICACC), Kochi, 2015.