Bidirectional Dilation Transformer for Multispectral and Hyperspectral Image Fusion

Authors: Shangqi Deng, Liang-Jian Deng, Xiao Wu, Ran Ran, Rui Wen

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, extensive experiments on two commonly used datasets, CAVE and Harvard, demonstrate the superiority of BDT both visually and quantitatively. Furthermore, the related code will be available at the GitHub page of the authors."
Researcher Affiliation | Academia | "University of Electronic Science and Technology of China; shangqideng0124@gmail.com, liangjian.deng@uestc.edu.cn, wxwsx1997@gmail.com, ranran@std.uestc.edu.cn, wenrui202102@163.com"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; it describes the methods in text and mathematical formulas.
Open Source Code | Yes | "Furthermore, the related code will be available at the GitHub page of the authors."
Open Datasets | Yes | "Datasets: To test the performance of our model, we conduct experiments on the CAVE and Harvard datasets. The CAVE dataset contains 32 HSIs, including 31 spectral bands ranging from 400 nm to 700 nm at 10 nm steps. We randomly select 20 images for training the network, and the remaining 11 images constitute the testing dataset. In addition, the Harvard dataset contains 77 HSIs of indoor and outdoor scenes, and each HSI has a size of 1392 × 1040 × 31, covering the spectral range from 420 nm to 720 nm. We crop the upper-left part (1000 × 1000) of 20 Harvard images, 10 of which have been used for training, and the rest has been exploited for testing."
Dataset Splits | Yes | "The pairs and their related GTs are randomly divided into training data (80%) and validation data (20%)."
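The 80%/20% random split quoted above can be sketched in plain Python. The function name `split_pairs` and the fixed seed are illustrative assumptions; the paper does not specify its splitting code or seed.

```python
import random

def split_pairs(pairs, train_frac=0.8, seed=0):
    """Randomly divide patch pairs into training (80%) and validation (20%) sets.

    `seed` is an assumption added here for repeatability; the paper does not
    state one.
    """
    rng = random.Random(seed)
    idx = list(range(len(pairs)))
    rng.shuffle(idx)  # random assignment, as described in the paper
    n_train = int(len(idx) * train_frac)
    train = [pairs[i] for i in idx[:n_train]]
    val = [pairs[i] for i in idx[n_train:]]
    return train, val

pairs = list(range(100))          # stand-in for (LR-HSI, MSI, GT) pairs
train, val = split_pairs(pairs)
print(len(train), len(val))       # 80 20
```

Because the split is random rather than stratified, reruns with a different seed yield different train/validation partitions, which matters when reporting reproducibility.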
Hardware Specification | Yes | "The proposed network implements in PyTorch 1.11.0 and Python 3.7.0 using AdamW optimizer with a learning rate of 0.0001 to minimize Ltotal by 2000 epochs and Linux operating system with a NVIDIA RTX 3090 GPU."
Software Dependencies | Yes | "The proposed network implements in PyTorch 1.11.0 and Python 3.7.0 using AdamW optimizer with a learning rate of 0.0001 to minimize Ltotal by 2000 epochs and Linux operating system with a NVIDIA RTX 3090 GPU."
Experiment Setup | Yes | Implementation Details: "The proposed network implements in PyTorch 1.11.0 and Python 3.7.0 using AdamW optimizer with a learning rate of 0.0001 to minimize Ltotal by 2000 epochs and Linux operating system with a NVIDIA RTX 3090 GPU."
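The quoted setup (AdamW, learning rate 1e-4, 2000 epochs, PyTorch) can be sketched as follows. The `Conv2d` stand-in model, the L1 loss as a placeholder for the paper's Ltotal, and the 31-band dummy tensors are assumptions; this is not the authors' BDT network.

```python
import torch

# Hypothetical stand-in for the BDT network (31 spectral bands in and out);
# the real architecture is not reproduced here.
model = torch.nn.Conv2d(31, 31, kernel_size=3, padding=1)

# Settings quoted in the paper: AdamW optimizer, learning rate 0.0001.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
num_epochs = 2000  # as quoted in the paper

# One illustrative training step on dummy 31-band patches.
x = torch.randn(1, 31, 64, 64)
target = torch.randn(1, 31, 64, 64)
loss = torch.nn.functional.l1_loss(model(x), target)  # placeholder for Ltotal
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a full run, the step above would sit inside a loop over `num_epochs` and the training loader; only a single step is shown to keep the sketch self-contained.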