Dynamic Brightness Adaptation for Robust Multi-modal Image Fusion

Authors: Yiming Sun, Bing Cao, Pengfei Zhu, Qinghua Hu

IJCAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments validate that our method surpasses state-of-the-art methods in preserving multi-modal image information and visual fidelity, while exhibiting remarkable robustness across varying brightness levels.
Researcher Affiliation Academia Yiming Sun, Bing Cao, Pengfei Zhu and Qinghua Hu; Tianjin University; Tianjin Key Lab of Machine Learning; Haihe Lab of ITAI; Engineering Research Center of the Ministry of Education on Urban Intelligence and Digital Governance; {sunyiming1895,caobing,zhupengfei,huqinghua}@tju.edu.cn
Pseudocode No The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Our code is available: https://github.com/SunYM2020/BA-Fusion.
Open Datasets Yes We conducted experiments on two publicly available datasets: M3FD [Liu et al., 2022] and LLVIP [Jia et al., 2021].
Dataset Splits No M3FD: It contains 4,200 infrared-visible image pairs captured by on-board cameras. We used 3,900 pairs of images for training and the remaining 300 pairs for evaluation. LLVIP: The LLVIP dataset contains 15,488 aligned infrared-visible image pairs, which were captured by surveillance cameras in different street scenes. We trained the model with 12,025 image pairs and evaluated 3,463 image pairs. While training and evaluation splits are provided, a distinct validation set split from the dataset is not explicitly stated.
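The reported splits can be sanity-checked: for each dataset, the train and evaluation counts should sum to the stated total. A minimal sketch using only the counts quoted above:

```python
# Sanity-check that the reported train/eval splits sum to each dataset's total.
# All counts are taken from the report above.
splits = {
    "M3FD": {"total": 4200, "train": 3900, "eval": 300},
    "LLVIP": {"total": 15488, "train": 12025, "eval": 3463},
}

for name, s in splits.items():
    assert s["train"] + s["eval"] == s["total"], f"{name}: split does not sum to total"
    print(f"{name}: {s['train']} train + {s['eval']} eval = {s['total']}")
```

Both datasets check out, which is consistent with the paper using the full dataset for train/eval with no held-out validation split.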
Hardware Specification Yes We performed experiments on a computing platform with four NVIDIA GeForce RTX 3090 GPUs.
Software Dependencies No The paper mentions 'Adam Optimization' but does not provide specific version numbers for software, libraries, or frameworks used for implementation.
Experiment Setup Yes We used Adam Optimization to update the overall network parameters with a learning rate of 1.0 × 10⁻⁴. The training epoch is set to 60 and the batch size is 8.
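From the reported setup (batch size 8, 60 epochs) and the 3,900 M3FD training pairs above, the implied number of optimizer steps can be estimated. This is a back-of-the-envelope sketch, not a figure from the paper; the one-pass-per-epoch and kept-partial-batch assumptions are ours:

```python
import math

# Hyperparameters as reported in the paper.
learning_rate = 1.0e-4   # Adam learning rate
epochs = 60
batch_size = 8
train_pairs_m3fd = 3900  # M3FD training pairs (from the split above)

# Implied optimizer-step count, assuming one full pass over the
# training set per epoch and keeping the final partial batch.
steps_per_epoch = math.ceil(train_pairs_m3fd / batch_size)
total_steps = epochs * steps_per_epoch
print(steps_per_epoch, total_steps)  # 488 steps/epoch, 29280 total
```

At roughly 29k Adam updates on M3FD, this is a fairly short training schedule, which fits a fusion network fine-tuned on aligned image pairs rather than trained from scratch on a large corpus.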