S2CycleDiff: Spatial-Spectral-Bilateral Cycle-Diffusion Framework for Hyperspectral Image Super-resolution

Authors: Jiahui Qu, Jie He, Wenqian Dong, Jingyu Zhao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments have been conducted on three widely used datasets to demonstrate the superiority of the proposed method over state-of-the-art HISR methods. The code is available at https://github.com/Jiahuiqu/S2CycleDiff.
Researcher Affiliation | Academia | State Key Laboratory of Integrated Service Network, Xidian University, Xi'an 710071, China. jhqu@xidian.edu.cn, jiehe@stu.xidian.edu.cn, wqdong@xidian.edu.cn, jingyuzhao@stu.xidian.edu.cn
Pseudocode | No | The paper describes the architecture and processes but does not include formal pseudocode or an algorithm block.
Open Source Code | Yes | The code is available at https://github.com/Jiahuiqu/S2CycleDiff.
Open Datasets | Yes | To illustrate the effectiveness of the proposed method, we conduct the comparative experiments with several competing methods on three public datasets, namely CAVE, Pavia Center, and Chikusei. We use Wald's protocol (Wald, Ranchin, and Mangolini 1997) to generate pairs of LrHSI and HrMSI for training. (A hedged sketch of this pair-generation step is given after the table.)
Dataset Splits | No | The paper specifies training and testing sets, but does not explicitly mention a separate validation set or its split. For instance, 'The CAVE dataset consists of 32 images with a size of 512 × 512 × 31, where 22 images are selected as the training set, while the remaining 10 images are allocated to the test set.' This outlines train and test, but not validation.
Hardware Specification | Yes | We conducted the experiments with the PyTorch framework and trained on two NVIDIA GeForce RTX 3090 GPUs.
Software Dependencies | No | The paper mentions the 'PyTorch framework' and 'Adam optimizer' but does not specify their version numbers, which are needed for reproducible software dependencies.
Experiment Setup | Yes | The experiments were conducted with a batch size of 8 and 100k iterations on all datasets. The Adam optimizer is employed for the optimization process, with a maximum learning rate set at 0.0001. The time step T is set to 2000, and the hyperparameter sequence {β1, β2, ..., βn} was defined with uniform growth ranging from 0 to 0.02. (A hedged training-configuration sketch is given after the table.)
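
The Open Datasets row above cites Wald's protocol for simulating training pairs from a reference hyperspectral image. The sketch below illustrates one common reading of that protocol: a per-band Gaussian blur plus downsampling to produce the LrHSI, and a spectral response matrix to produce the HrMSI. The kernel width, blur sigma, scale factor, and random response matrix are illustrative assumptions, not values taken from the paper or its code release.

```python
# Minimal sketch of Wald's-protocol pair generation (assumed degradation model,
# not the authors' exact implementation).
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 7, sigma: float = 2.0) -> torch.Tensor:
    """Return a normalized 2-D Gaussian kernel of shape (size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel_2d = torch.outer(g, g)
    return kernel_2d / kernel_2d.sum()

def wald_protocol(hr_hsi: torch.Tensor, srf: torch.Tensor, scale: int = 4):
    """Simulate (LrHSI, HrMSI) from a reference HrHSI.

    hr_hsi: (B, C, H, W) reference hyperspectral image.
    srf:    (M, C) spectral response matrix mapping C HSI bands to M MSI bands.
    """
    b, c, h, w = hr_hsi.shape
    # Spatial degradation: per-band Gaussian blur followed by downsampling.
    kernel = gaussian_kernel().to(hr_hsi).repeat(c, 1, 1, 1)
    blurred = F.conv2d(hr_hsi, kernel, padding=kernel.shape[-1] // 2, groups=c)
    lr_hsi = blurred[:, :, ::scale, ::scale]
    # Spectral degradation: integrate HSI bands with the spectral response matrix.
    hr_msi = torch.einsum("mc,bchw->bmhw", srf, hr_hsi)
    return lr_hsi, hr_msi

if __name__ == "__main__":
    # CAVE-like shapes: 31 HSI bands reduced to 3 MSI bands.
    hr_hsi = torch.rand(1, 31, 512, 512)
    srf = torch.rand(3, 31)
    srf = srf / srf.sum(dim=1, keepdim=True)   # normalize each MSI band's response
    lr_hsi, hr_msi = wald_protocol(hr_hsi, srf, scale=4)
    print(lr_hsi.shape, hr_msi.shape)          # (1, 31, 128, 128) and (1, 3, 512, 512)
```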
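
The Experiment Setup row reports the optimizer, schedule, and diffusion hyperparameters. The sketch below wires those reported values (batch size 8, 100k iterations, Adam at a maximum learning rate of 1e-4, T = 2000 time steps, betas growing uniformly from 0 to 0.02) into a generic noise-prediction training loop; the placeholder model, patch size, and loss are assumptions for illustration, not the authors' cycle-diffusion network or objective.

```python
# Sketch of the reported training configuration plugged into a generic
# diffusion-style training loop (model and loss are placeholders).
import torch
import torch.nn.functional as F

T = 2000
betas = torch.linspace(0.0, 0.02, T)                 # uniform (linear) beta schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # cumulative product used to noise x0 at step t

model = torch.nn.Conv2d(31, 31, 3, padding=1)        # placeholder for the cycle-diffusion network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

batch_size, num_iterations = 8, 100_000
for step in range(num_iterations):
    x0 = torch.rand(batch_size, 31, 64, 64)          # placeholder HrHSI patches
    t = torch.randint(0, T, (batch_size,))           # random diffusion timestep per sample
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    xt = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward process q(x_t | x_0)
    pred = model(xt)
    loss = F.mse_loss(pred, noise)                   # standard noise-prediction objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    break  # demo: run a single step; drop this break for the full 100k-iteration schedule
```

With T = 2000 and betas growing linearly to 0.02, the cumulative product of (1 − β) decays to nearly zero by the final timestep, so the last step is close to pure noise, which is the usual starting point of the reverse (generation) process.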