
Visible-Infrared image fusion using DTCWT and Adaptive combined clustered dictionary

Publication Type : Journal

Publisher : Elsevier-Infrared Physics and Technology

Source : Elsevier-Infrared Physics and Technology, Vol. 93, pp. 300-309.

Url : https://www.sciencedirect.com/science/article/abs/pii/S1350449518301361

Keywords : Image fusion; DTCWT; Dictionary learning; Sparse representation

Campus : Chennai

School : School of Engineering

Center : Amrita Innovation & Research

Department : Electronics and Communication

Verified : Yes

Year : 2018

Abstract : Capturing both the daylight scene information and the hidden target information in a single image is an active research topic in computer vision and image processing. In this paper, an image fusion technique named DTCWT-ACCD is proposed for the fusion of visible and infrared images. First, an adaptive dictionary is constructed by combining several sub-dictionaries learned from clustered patches of the source images. The source images are then decomposed by DTCWT to obtain low-frequency and high-frequency sub-bands. The low-frequency sub-bands are merged using a novel sparse-representation-based fusion rule, while the high-frequency sub-bands are combined by selecting the coefficient with the maximum absolute value, followed by a consistency verification (CV) check. Finally, the fused image is reconstructed by applying the inverse DTCWT. The DTCWT-ACCD approach is evaluated with both subjective and objective assessments to verify its effectiveness. The results indicate that it outperforms conventional MST-based methods and state-of-the-art sparse representation (SR) based methods.
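
The sketch below illustrates the overall pipeline described in the abstract, assuming the open-source Python "dtcwt" package for the transform. It is not the authors' implementation: the paper's sparse-based low-frequency fusion rule (which requires the learned adaptive combined clustered dictionary) is replaced here by a simple average as a placeholder, and the consistency verification is approximated by a majority filter on the selection map. Function and parameter names are illustrative assumptions.

```python
import numpy as np
import dtcwt
from scipy.ndimage import uniform_filter


def fuse_highpass(hp_a, hp_b, cv_window=3):
    """Max-absolute selection of high-frequency coefficients, followed by a
    simple consistency verification (CV): a coefficient outvoted by its
    spatial neighbours switches to the other source image."""
    decision = (np.abs(hp_a) >= np.abs(hp_b)).astype(float)  # 1 -> take A
    # Majority filter applied per orientation sub-band (last axis untouched).
    decision = uniform_filter(decision, size=(cv_window, cv_window, 1)) > 0.5
    return np.where(decision, hp_a, hp_b)


def dtcwt_fuse(img_visible, img_infrared, nlevels=4):
    """Fuse two registered, equally sized grayscale images (float arrays)."""
    transform = dtcwt.Transform2d()
    pyr_a = transform.forward(img_visible, nlevels=nlevels)
    pyr_b = transform.forward(img_infrared, nlevels=nlevels)

    # Placeholder low-frequency rule: plain averaging instead of the paper's
    # sparse-representation rule over the adaptive combined dictionary.
    lowpass = 0.5 * (pyr_a.lowpass + pyr_b.lowpass)

    # High-frequency rule: max-absolute value with CV check, per level.
    highpasses = tuple(
        fuse_highpass(ha, hb)
        for ha, hb in zip(pyr_a.highpasses, pyr_b.highpasses)
    )

    # Reconstruct the fused image with the inverse DTCWT.
    return transform.inverse(dtcwt.Pyramid(lowpass, highpasses))
```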

Cite this Research Publication : Aishwarya N and Bennila Thangammal C, “Visible-Infrared image fusion using DTCWT and Adaptive combined clustered dictionary”, Elsevier-Infrared Physics and Technology, Vol. 93, pp. 300-309, 2018.
