Hyperspectral Image Reconstruction via Combinatorial Embedding of Cross-Channel Spatio-Spectral Clues

Authors: Xingxing Yang, Jie Chen, Zaifeng Yang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive quantitative and qualitative experiments show that our method (dubbed CESST) achieves SOTA performance.
Researcher Affiliation | Academia | Xingxing Yang¹, Jie Chen¹*, Zaifeng Yang²; ¹Department of Computer Science, Hong Kong Baptist University; ²Institute of High Performance Computing, Agency for Science, Technology and Research
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code for this project is at: https://github.com/AlexYangxx/CESST.
Open Datasets | Yes | We adopt two datasets, the NTIRE2022 HSI dataset (Arad et al. 2022) and the ICVL HSI dataset (Arad and Ben-Shahar 2016), to evaluate the performance of our CESST.
Dataset Splits | Yes | The NTIRE2022 HSI dataset provides 950 RGB-HSI pairs, of which 900 are used for training and 50 for validation. For the ICVL dataset, since it contains 18 images with differing resolutions, only the remaining 183 image pairs are used (147 for training and 36 for testing).
Hardware Specification | Yes | The whole training time of the proposed CESST is about 40 hours on a single NVIDIA Ampere A100-40G.
Software Dependencies | No | The paper mentions 'We implement our CESST with Pytorch' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | We implement our CESST in PyTorch. All models are trained with the Adam optimizer (Kingma and Ba 2014; β1 = 0.9, β2 = 0.999) for 300 epochs. The learning rate is initialized to 0.0002, and a cosine annealing scheme is adopted. During training, RGB-HSI pairs are first cropped to 128 × 128, and the input RGB images are linearly rescaled to [0, 1]. Random rotation and flipping are employed to augment the training data.
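The learning-rate schedule described in the setup row can be sketched in plain Python. This is a minimal sketch of the cosine annealing formula (the same form PyTorch's `CosineAnnealingLR` uses); the minimum rate `lr_min = 0` is an assumption, since the paper only reports the initial rate of 0.0002 over 300 epochs.

```python
import math

def cosine_annealing_lr(epoch: int, total_epochs: int = 300,
                        lr_init: float = 2e-4, lr_min: float = 0.0) -> float:
    """Cosine annealing schedule: lr(t) = lr_min + 0.5*(lr_init - lr_min)*(1 + cos(pi*t/T)).

    lr_min = 0 is an assumption; the paper states only the initial rate 0.0002.
    """
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

# The rate starts at 2e-4, halves at the midpoint, and approaches 0 by epoch 300.
schedule = [cosine_annealing_lr(e) for e in range(301)]
```

In an actual PyTorch training loop this would correspond to `torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)` stepped once per epoch.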