Enhancing Hyperspectral Images via Diffusion Model and Group-Autoencoder Super-resolution Network

Authors: Zhaoyang Wang, Dongyang Li, Mingyang Zhang, Hao Luo, Maoguo Gong

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on both natural and remote sensing hyperspectral datasets demonstrate that the proposed method is superior to other state-of-the-art methods both visually and metrically."
Researcher Affiliation | Collaboration | Zhaoyang Wang (1,2)*, Dongyang Li (2,3), Mingyang Zhang (1), Hao Luo (2,3), Maoguo Gong (1); (1) Ministry of Education Key Laboratory of Collaborative Intelligence Systems, Xidian University; (2) DAMO Academy, Alibaba Group, 310023, Hangzhou, China; (3) Hupan Lab, 310023, Hangzhou, China
Pseudocode | Yes | Algorithm 1: Testing process
Open Source Code | No | The paper does not state that the authors are releasing their code, nor does it provide a direct link to a source-code repository for the described method.
Open Datasets | Yes | "In our experiments, we used three publicly available datasets to validate the performance of our model. These datasets include two remote-sensing HSI datasets: Pavia Center (Pavia C) dataset and Chikusei dataset (Yokoya and Iwasaki 2016), and one natural image HSI dataset: Harvard dataset (Chakrabarti and Zickler 2011)."
Dataset Splits | No | The paper mentions which datasets are used for training and evaluation, but it does not specify exact train/validation/test splits or per-split sample counts, nor does it cite predefined splits that would allow the partitioning to be reproduced.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU models, CPU models, or cloud computing instance types.
Software Dependencies | No | The paper mentions using the Adam optimizer and a pre-trained SR3 diffusion model, but it does not provide version numbers for these or for any other key software components (e.g., Python, PyTorch, CUDA) needed for reproducibility.
Experiment Setup | Yes | "We used the Adam optimizer with β1 = 0.9 and β2 = 0.999 for training, with a batch size of 8 for the Harvard dataset and 4 for the Pavia C and Chikusei datasets. The learning rate was set to 1e-4 during GAE training and reduced to 1e-5 for the diffusion model. During the training process, we utilized a pre-trained SR3 diffusion model. In the GAE module, bands were divided into subgroups of size 16 for Pavia C and Chikusei datasets, and 8 for the Harvard dataset, with one-quarter overlap between subgroups."
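
To make the quoted experiment setup concrete, the following is a minimal PyTorch sketch of the reported band-grouping and optimizer configuration. It is not the authors' implementation: the `split_into_subgroups` helper, the placeholder modules standing in for the Group-Autoencoder and the diffusion network, and the 102-band count used in the example are assumptions for illustration only.

```python
# Minimal sketch of the reported training configuration (not the authors' code).
# Assumptions: PyTorch, hypothetical placeholder modules, and 102 bands as an
# illustrative spectral band count.
import torch


def split_into_subgroups(num_bands: int, group_size: int):
    """Divide band indices into subgroups of `group_size` with one-quarter
    overlap between consecutive subgroups (i.e. stride = 3/4 * group_size)."""
    stride = group_size - group_size // 4  # e.g. 16 - 4 = 12
    starts = list(range(0, num_bands - group_size + 1, stride))
    if starts[-1] + group_size < num_bands:  # ensure the last bands are covered
        starts.append(num_bands - group_size)
    return [list(range(s, s + group_size)) for s in starts]


# Subgroup size 16 for Pavia C / Chikusei, 8 for Harvard (per the quoted setup).
print(split_into_subgroups(num_bands=102, group_size=16))

# Reported optimizer settings: Adam with beta1 = 0.9, beta2 = 0.999,
# lr = 1e-4 for GAE training and 1e-5 for the diffusion model.
gae = torch.nn.Linear(16, 16)        # hypothetical stand-in for the Group-Autoencoder
diffusion = torch.nn.Linear(16, 16)  # hypothetical stand-in for the SR3-style diffusion model

gae_optimizer = torch.optim.Adam(gae.parameters(), lr=1e-4, betas=(0.9, 0.999))
diffusion_optimizer = torch.optim.Adam(diffusion.parameters(), lr=1e-5, betas=(0.9, 0.999))

# Batch sizes per the paper: 8 for the Harvard dataset, 4 for Pavia C and Chikusei.
batch_size = {"harvard": 8, "pavia_c": 4, "chikusei": 4}
```

Running the helper with a subgroup size of 16 yields overlapping index ranges (0-15, 12-27, 24-39, ...), so consecutive subgroups share 4 bands, matching the one-quarter overlap described in the quoted setup.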