Entropy-Based Image Fusion with Joint Sparse Representation and Rolling Guidance Filter
By: Yudan Liu, Xiaomin Yang, Rongzhu Zhang, Marcelo Keese Albertini, Turgay Celik, Gwanggil Jeon
Format: Article
Published: MDPI AG, 2020-01-01
Description
Image fusion is a practical technology with applications in many fields, such as medicine, remote sensing, and surveillance. This paper introduces an image fusion method based on multi-scale decomposition and joint sparse representation. First, joint sparse representation decomposes the two source images into a common image and two innovation images. Second, two initial weight maps are generated by filtering the two source images separately; the final weight maps are then obtained from the initial weight maps by joint bilateral filtering. Next, the innovation images are decomposed at multiple scales using the rolling guidance filter. Finally, the final weight maps are used to generate the fused innovation image, which is combined with the common image to produce the ultimate fused image. Experimental results show that the method's average metrics are: mutual information (MI) 5.3377, feature mutual information (FMI) 0.5600, normalized weighted edge preservation (Q^{AB/F}) 0.6978, and nonlinear correlation information entropy (NCIE) 0.8226. The method achieves better performance than state-of-the-art methods in both visual perception and objective quantification.
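The data flow described above can be sketched as follows. This is only a minimal illustration, not the authors' method: joint sparse representation, the rolling guidance filter, and joint bilateral filtering are replaced with crude stand-ins (a mean-based split and Gaussian smoothing) purely to show the pipeline of common/innovation decomposition, weight-map construction, weighted innovation fusion, and recombination.

```python
# Simplified sketch of the fusion pipeline (stand-ins, not the paper's method).
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(img_a, img_b, sigma=2.0):
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)

    # Step 1 (stand-in for joint sparse representation): split each source
    # into a shared "common" image and per-source "innovation" images.
    common = (a + b) / 2.0
    innov_a, innov_b = a - common, b - common

    # Step 2 (stand-in for joint bilateral filtering of initial weight maps):
    # weight maps from the smoothed local energy of each innovation image.
    wa = gaussian_filter(np.abs(innov_a), sigma)
    wb = gaussian_filter(np.abs(innov_b), sigma)
    total = wa + wb + 1e-12          # avoid division by zero
    wa, wb = wa / total, wb / total  # normalized final weight maps

    # Step 3: weighted fusion of the innovation images, then recombination
    # with the common image to obtain the fused result.
    fused_innov = wa * innov_a + wb * innov_b
    return common + fused_innov

# Toy usage: two constant "images" fuse back to their shared common part.
a = np.full((8, 8), 10.0)
b = np.full((8, 8), 20.0)
f = fuse(a, b)
```

In the constant-image case the innovation weights are symmetric, so the fused innovations cancel and the output equals the common image; in the real method the rolling guidance filter's multi-scale decomposition lets detail from each source be weighted scale by scale instead.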