Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging

Authors: Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Henghui Ding, Yulun Zhang, Radu Timofte, Luc Van Gool

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Simulation and real experiments are conducted. "... Tab. 1 compares the results of DAUHST and 16 SOTA methods... Fig. 1 plots the PSNR-FLOPS comparisons of DAUHST and SOTA unfolding methods."
Researcher Affiliation | Academia | 1 Shenzhen International Graduate School, Tsinghua University; 2 Shenzhen Institute of Future Media Technology; 3 Westlake University; 4 ETH Zürich; 5 University of Würzburg
Pseudocode | No | The paper includes architectural diagrams (e.g., Figure 2, Figure 3) but does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code and models are publicly available at https://github.com/caiyuanhao1998/MST
Open Datasets | Yes | We adopt two datasets, i.e., CAVE [64] and KAIST [65], for simulation experiments. The CAVE dataset consists of 32 HSIs with spatial size 512×512. The KAIST dataset contains 30 HSIs of spatial size 2704×3376.
Dataset Splits | No | The paper states "CAVE dataset is adopted as the training set while 10 scenes from the KAIST dataset are selected for testing" but does not mention a separate validation split.
Hardware Specification | Yes | All DAUHST models are trained with Adam [66] optimizer (β1 = 0.9 and β2 = 0.999) using Cosine Annealing scheme [67] for 300 epochs on an RTX 3090 GPU.
Software Dependencies | No | The paper mentions "We implement DAUHST by Pytorch" but does not specify a version for PyTorch or any other software dependency.
Experiment Setup | Yes | All DAUHST models are trained with Adam [66] optimizer (β1 = 0.9 and β2 = 0.999) using Cosine Annealing scheme [67] for 300 epochs on an RTX 3090 GPU. The initial learning rate is 4×10⁻⁴. Patches with spatial sizes 256×256 and 660×660 are randomly cropped from the 3D HSI cubes with 28 channels as training samples for the simulation and real experiments, respectively. The shifting step d in the dispersion is set to 2. The batch size is 5.
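
For readers reconstructing the setup, the sketch below wires the reported hyperparameters (Adam with β1 = 0.9 and β2 = 0.999, initial learning rate 4×10⁻⁴, cosine annealing over 300 epochs, batch size 5, random 256×256 crops from 28-band cubes, dispersion shift step d = 2) into a minimal PyTorch training loop. The dataset, the stand-in model, and the `disperse` helper are illustrative placeholders, not the authors' released code; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader


class RandomHSICubes(Dataset):
    """Placeholder dataset: random 28-band cubes with 256x256 random crops,
    standing in for the CAVE training HSIs (not the authors' data pipeline)."""

    def __init__(self, num_scenes=32, full_size=512, crop=256, bands=28):
        self.num_scenes = num_scenes
        self.full_size = full_size
        self.crop = crop
        self.bands = bands

    def __len__(self):
        return self.num_scenes

    def __getitem__(self, idx):
        cube = torch.rand(self.bands, self.full_size, self.full_size)
        top = torch.randint(0, self.full_size - self.crop + 1, ()).item()
        left = torch.randint(0, self.full_size - self.crop + 1, ()).item()
        return cube[:, top:top + self.crop, left:left + self.crop]


def disperse(cube, d=2):
    """CASSI-style dispersion for reference: shift band b horizontally by
    b*d pixels and sum the bands into a single 2D measurement (d = 2)."""
    bands, h, w = cube.shape
    meas = cube.new_zeros(h, w + (bands - 1) * d)
    for b in range(bands):
        meas[:, b * d:b * d + w] += cube[b]
    return meas


model = nn.Conv2d(28, 28, kernel_size=3, padding=1)  # stand-in for DAUHST
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
loader = DataLoader(RandomHSICubes(), batch_size=5, shuffle=True)

for epoch in range(300):
    for gt in loader:  # gt: (5, 28, 256, 256)
        pred = model(gt)  # the real model reconstructs from the 2D measurement
        loss = nn.functional.mse_loss(pred, gt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()  # cosine annealing across the 300 epochs
```

For the real experiments, the crop size would change to 660×660 per the quoted setup; everything else in the loop stays the same.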