Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and Beyond

Authors: Zhu Liu, Jinyuan Liu, Guanyao Wu, Long Ma, Xin Fan, Risheng Liu

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate the superiority of our method, which not only produces visually pleasant fused results but also realizes significant promotion for detection and segmentation than the state-of-the-art approaches." Supporting sections: 3 Experiments, 3.2 Evaluation in Multi-modality Image Fusion, 3.3 Comparing with SOTA in Object Detection, 3.4 Extension to Multi-modality Segmentation, 3.5 Ablation Studies.
Researcher Affiliation | Academia | Zhu Liu1, Jinyuan Liu2, Guanyao Wu1, Long Ma1, Xin Fan3 and Risheng Liu3. 1 School of Software Technology, Dalian University of Technology, China; 2 School of Mechanical Engineering, Dalian University of Technology, China; 3 International School of Information Science Engineering, Dalian University of Technology, China.
Pseudocode | Yes | "Algorithm 1: Dynamic Aggregated Solution."
Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code, nor a link to a code repository.
Open Datasets | Yes | "We implemented the experiments on five representative datasets (TNO, Road Scene [Xu et al., 2022] and M3FD for visual evaluations, M3FD [Liu et al., 2022a] and Multi Spectral [Takumi et al., 2017] for detection, and MFNet [Ha et al., 2017] for segmentation)."
Dataset Splits | No | The paper states that "Φ and ϕ are the objectives on the validation and training datasets respectively" (see the bi-level sketch after the table), but it does not provide specific split percentages, sample counts, or a detailed splitting methodology.
Hardware Specification | Yes | "All experiments are implemented with the PyTorch framework and on an NVIDIA Tesla V100 GPU."
Software Dependencies | No | The paper mentions the PyTorch framework ("All experiments are implemented with the PyTorch framework and on an NVIDIA Tesla V100 GPU") and the SGD optimizer, but no version numbers are provided.
Experiment Setup | Yes | "SGD optimizer is utilized to update the parameters of each module. As for the perception task, the initialized learning rate is 2e-4 and will be decreased to 2e-6 with a multi-step decay strategy. As for the optimization of fusion, we utilize the same learning rate." A scheduler sketch is given below.
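
The bi-level coupling quoted in the Dataset Splits row, with an upper-level objective Φ evaluated on validation data and a lower-level objective ϕ on training data, can be pictured as an alternating update between a fusion module and a perception module. The sketch below is an assumption-laden illustration, not the paper's Algorithm 1 (Dynamic Aggregated Solution): which network sits at which level, the loss callables, and freezing the lower level during the upper-level step are all placeholders.

```python
# Minimal sketch of one bi-level alternating step, assuming a fusion network
# (lower level, objective phi on a training batch) and a perception network
# (upper level, objective Phi on a validation batch). All names are
# illustrative, not the paper's Algorithm 1.
import torch

def bilevel_step(fusion_net, task_net, train_batch, val_batch,
                 fusion_opt, task_opt, phi_loss, Phi_loss):
    # Lower level: update fusion parameters on the training objective phi.
    ir, vis = train_batch
    fused = fusion_net(ir, vis)
    lower = phi_loss(fused, ir, vis)
    fusion_opt.zero_grad()
    lower.backward()
    fusion_opt.step()

    # Upper level: update perception (detection/segmentation) parameters on
    # the validation objective Phi, computed on the current fused images.
    ir_v, vis_v, target_v = val_batch
    with torch.no_grad():
        fused_v = fusion_net(ir_v, vis_v)  # lower level held fixed here
    upper = Phi_loss(task_net(fused_v), target_v)
    task_opt.zero_grad()
    upper.backward()
    task_opt.step()
    return lower.item(), upper.item()
```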
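The training configuration in the Experiment Setup row (SGD, initial learning rate 2e-4 decayed to 2e-6 with a multi-step strategy) maps naturally onto PyTorch's MultiStepLR. The milestones, momentum, and epoch count below are assumptions; only the optimizer type, the start and end learning rates, and the multi-step decay come from the paper.

```python
import torch

# Hypothetical stand-in for one of the paper's modules.
model = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1)

# SGD with the reported initial learning rate of 2e-4 (momentum is assumed).
optimizer = torch.optim.SGD(model.parameters(), lr=2e-4, momentum=0.9)

# Multi-step decay from 2e-4 toward 2e-6: two drops by a factor of 10.
# The milestone epochs are assumed; the paper does not state them.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 60], gamma=0.1)

for epoch in range(90):
    # ... forward pass, loss, and backward() over training batches elided ...
    optimizer.step()
    scheduler.step()
```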