Test-Time Dynamic Image Fusion

Authors: Bing Cao, Yinan Xia, Yi Ding, Changqing Zhang, Qinghua Hu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments and discussions with in-depth analysis on multiple benchmarks confirm our findings and superiority. Our code is available at https://github.com/Yinan-Xia/TTD."
Researcher Affiliation | Academia | Bing Cao (1,2), Yinan Xia (1), Yi Ding (1), Changqing Zhang (1,2), Qinghua Hu (1,2); 1: College of Intelligence and Computing, Tianjin University, Tianjin, China; 2: Tianjin Key Lab of Machine Learning, Tianjin, China
Pseudocode | Yes | "Algorithm 1: Algorithm of dynamic fusion strategy" (an illustrative sketch follows the table)
Open Source Code | Yes | "Our code is available at https://github.com/Yinan-Xia/TTD."
Open Datasets | Yes | "Datasets. We evaluate our proposed method on four image fusion tasks: Visible-Infrared Fusion (VIF), Medical Image Fusion (MIF), Multi-Exposure Fusion (MEF), and Multi-Focus Fusion (MFF). VIF: For VIF tasks, we conduct experiments on two datasets: LLVIP [38] and MSRS [17]. MIF: We conduct experiments on the Harvard Medical Image Dataset, following the test setting in [29]. MEF: Following the setting in [24], we verified the performance of our method on MEFB [39] dataset. MFF: For the MFF task, we evaluate our method on MFI-WHU datasets [40], following the test protocol in [24]."
Dataset Splits | No | The paper names the datasets it uses and describes the evaluation settings, but it does not provide explicit training/validation/test split percentages or sample counts for these datasets; it refers only to a "test set" and "test setting" for evaluation.
Hardware Specification | Yes | "Our experiments are conducted on Huawei Atlas 800 Training Server with CANN and NVIDIA RTX A6000 GPU."
Software Dependencies | No | The paper mentions CANN as part of the hardware setup ("Huawei Atlas 800 Training Server with CANN") but does not give a version number for CANN or for any other software dependency, such as the programming language, framework, or libraries used.
Experiment Setup | No | The paper describes its approach as a "test-time adaption approach" that "does not require additional training, fine-tuning, and extra parameters." Although the method is applied on top of pre-trained baselines, the paper does not specify the hyperparameters or training configurations (e.g., learning rate, batch size, optimizer) used to train those baselines, nor any other system-level settings for the experiments.
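
The "Pseudocode" row above refers to the paper's Algorithm 1, a dynamic fusion strategy applied at test time with no training, fine-tuning, or extra parameters. For orientation only, the following is a minimal Python sketch of what such a training-free, test-time dynamic fusion step can look like: per-pixel fusion weights are derived from the source images themselves at inference. The saliency measure (local gradient energy), the window size, and the function name dynamic_fuse are illustrative assumptions, not the paper's actual formulation; consult the released code at https://github.com/Yinan-Xia/TTD for the real Algorithm 1.

    # Hypothetical sketch of test-time dynamic image fusion (NOT the paper's
    # exact Algorithm 1): weights are computed per pixel at inference time,
    # with no learned parameters. Local gradient energy stands in for the
    # paper's per-source dominance measure.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def dynamic_fuse(src_a: np.ndarray, src_b: np.ndarray,
                     win: int = 7, eps: float = 1e-8) -> np.ndarray:
        """Fuse two grayscale images in [0, 1] with test-time pixel-wise weights."""
        def saliency(img: np.ndarray) -> np.ndarray:
            gy, gx = np.gradient(img)                   # pixel-wise intensity gradients
            return uniform_filter(gx**2 + gy**2, size=win)  # local gradient energy

        s_a, s_b = saliency(src_a), saliency(src_b)
        w_a = s_a / (s_a + s_b + eps)                   # normalized per-pixel weight map
        return w_a * src_a + (1.0 - w_a) * src_b        # convex pixel-wise combination

    # Usage (e.g., for the VIF task): fused = dynamic_fuse(visible_gray, infrared_gray)

Because every quantity above is computed from the test inputs, this style of fusion needs no training stage, which matches the report's "Experiment Setup" finding that no training hyperparameters are given for the method itself.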