DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion

Authors: Zixiang Zhao, Shuang Xu, Chunxia Zhang, Junmin Liu, Jiangshe Zhang, Pengfei Li

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Qualitative and quantitative results illustrate that our method can generate fusion images containing highlighted targets and abundant detail texture information with strong reproducibility and meanwhile surpass state-of-the-art (SOTA) approaches.
Researcher Affiliation | Collaboration | (1) School of Mathematics and Statistics, Xi'an Jiaotong University, China; (2) Hikvision, China
Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not include an explicit statement about releasing code or a link to a code repository for the described methodology.
Open Datasets | Yes | Our experiments are conducted on three datasets, including TNO [Toet and Hogervorst, 2012], NIR [Brown and Süsstrunk, 2011] and FLIR (available at https://github.com/jiayi-ma/RoadScene).
Dataset Splits | Yes | In our experiment, we divide them into training, validation, and test sets. Table 2 shows the numbers of image pairs, illumination and scene information of the datasets. We randomly selected 180 pairs of images in the FLIR dataset as training samples. Before training, all images are transformed into grayscale. At the same time, we center-crop them to 128×128 pixels.
Hardware Specification | Yes | All experiments were conducted with PyTorch on a computer with an Intel Core i7-9750H CPU @ 2.60 GHz and an RTX 2070 GPU.
Software Dependencies | No | The paper mentions PyTorch but does not specify its version number or any other software dependencies with their respective versions.
Experiment Setup | Yes | The tuning parameters in the loss function are empirically set as follows: α1 = 0.05, α2 = 2, α3 = 2, α4 = 10 and λ = 5. In the training phase, the network is optimized by Adam over 120 epochs with a batch size of 24. The learning rate is set to 10^-3 and decreased by a factor of 10 every 40 epochs.
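The reported training hyperparameters can be summarized in a minimal plain-Python sketch. This is only an illustration of the stated schedule and loss weights; the `didfuse_lr` helper and `LOSS_WEIGHTS` names are hypothetical, not from the paper, and the actual DIDFuse training code is not public.

```python
def didfuse_lr(epoch, base_lr=1e-3, decay=0.1, step=40):
    """Learning rate as reported: 1e-3, divided by 10 every 40 epochs."""
    return base_lr * decay ** (epoch // step)

# Loss-weight settings as reported in the paper:
# alpha1 = 0.05, alpha2 = 2, alpha3 = 2, alpha4 = 10, lambda = 5
LOSS_WEIGHTS = {"alpha1": 0.05, "alpha2": 2.0, "alpha3": 2.0,
                "alpha4": 10.0, "lambda": 5.0}

EPOCHS = 120      # Adam optimizer, batch size 24 (per the paper)
schedule = [didfuse_lr(e) for e in range(EPOCHS)]
```

In PyTorch this step decay corresponds to wrapping the Adam optimizer in `torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)`.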