A Trainable Spectral-Spatial Sparse Coding Model for Hyperspectral Image Restoration

Authors: Théo Bodrito, Alexandre Zouaoui, Jocelyn Chanussot, Julien Mairal

NeurIPS 2021

Each entry below gives a reproducibility variable, the assessed result, and the supporting LLM response (quotes are taken from the paper):
Research Type: Experimental. "We show on various denoising benchmarks that our method is computationally efficient and significantly outperforms the state of the art." "We experimentally evaluate our HSI model on standard denoising benchmarks, showing a significant improvement over the state of the art (including deep learning models and more traditional baselines), while being computationally very efficient at test time."
Researcher Affiliation: Academia. "Théo Bodrito, Alexandre Zouaoui, Jocelyn Chanussot, and Julien Mairal. Inria, Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000 Grenoble, France. firstname.lastname@inria.fr"
Pseudocode: No. The paper does not contain structured pseudocode or algorithm blocks.
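Although the paper gives no pseudocode, the experiment-setup quote further down describes the model as unrolled sparse coding (12 and 5 iterations for the two layers). A minimal LISTA-style sketch of one such unrolled layer is shown below; the class name, dimensions, and initializations are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledSparseCodingLayer(nn.Module):
    """Hypothetical sketch of an unrolled (LISTA-style) sparse coding layer.

    Approximately solves min_a 0.5 * ||x - D a||^2 + lam * ||a||_1 with
    n_iters unrolled ISTA iterations; the dictionary D, step size, and
    threshold are all learned end to end.
    """

    def __init__(self, input_dim: int, code_dim: int, n_iters: int = 12):
        super().__init__()
        self.D = nn.Parameter(torch.randn(input_dim, code_dim) * 0.01)  # dictionary
        self.log_step = nn.Parameter(torch.zeros(1))          # learnable step size
        self.log_lam = nn.Parameter(torch.full((1,), -2.0))   # learnable sparsity threshold
        self.n_iters = n_iters

    def forward(self, x):  # x: (batch, input_dim)
        step, lam = self.log_step.exp(), self.log_lam.exp()
        a = torch.zeros(x.shape[0], self.D.shape[1], device=x.device)
        for _ in range(self.n_iters):
            residual = x - a @ self.D.t()                      # reconstruction error
            a = a + step * (residual @ self.D)                 # gradient step on the code
            a = torch.sign(a) * F.relu(a.abs() - step * lam)   # soft thresholding
        return a @ self.D.t()                                  # restored signal
```

In the paper's two-layer architecture, one such layer would act along the spectral dimension and a second along the spatial dimension, with the stated 12 and 5 unrolled iterations respectively.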
Open Source Code: Yes. "Code is available at https://github.com/inria-thoth/T3SC."
Open Datasets: Yes. "We evaluate our approach on two datasets with significantly different properties. ICVL [4] consists of 204 images of size 1392 × 1300 with 31 bands. ... Washington DC Mall is perhaps the most widely used dataset for HSI denoising and consists of a high-quality image of size 1280 × 307 with 191 bands. ... Specific experiments were also conducted with the datasets APEX [28], Pavia, Urban [58] and CAVE [64], which appear in the supplementary material."
Dataset Splits: No. "We used 100 images for training and 50 for testing as in [62] but with a different train/test split ensuring that similar images (e.g., pictures from the same scene) are not used twice. ... Following [54], we split the image into two sub-images of size 600 × 307 and 480 × 307 for training and one sub-image of size 200 × 200 for testing." The paper does not explicitly state a validation split.
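To make the described constraint concrete (images from one scene must never straddle the train/test boundary), here is a small scene-aware splitting sketch. The scene_of grouping and the function name are hypothetical, since the paper does not publish its grouping.

```python
import random
from collections import defaultdict

def scene_aware_split(image_ids, scene_of, n_train=100, n_test=50, seed=0):
    """Split images so that all images from one scene land on one side.

    image_ids: list of image identifiers.
    scene_of:  dict mapping each image id to a scene label (hypothetical;
               the paper does not publish its actual grouping).
    """
    groups = defaultdict(list)
    for img in image_ids:
        groups[scene_of[img]].append(img)
    scenes = list(groups)
    random.Random(seed).shuffle(scenes)

    train, test = [], []
    for scene in scenes:
        # Assign whole scenes: fill the training budget first, then test.
        # Scenes that fit neither budget are left out of both sides.
        if len(train) + len(groups[scene]) <= n_train:
            train.extend(groups[scene])
        elif len(test) + len(groups[scene]) <= n_test:
            test.extend(groups[scene])
    return train, test
```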
Hardware Specification: Yes. "SMDS, QRNN3D and T3SC are using a V100 GPU; BM4D, GLF, LLRT and NGMeet are using an Intel(R) Xeon(R) CPU E5-1630 v4 @ 3.70GHz."
Software Dependencies: No. The paper mentions "standard deep learning frameworks" but does not provide specific software names with version numbers for reproducibility.
Experiment Setup: Yes. "We trained our network by minimizing the MSE between the ground-truth and restored images. For ICVL, we follow the training procedure described in [62]: we first center-crop training images to size 1024 × 1024, then we extract patches of size 64 × 64 at scales 1:1, 1:2, and 1:4, with strides 64, 32 and 32 respectively. ... Basic data augmentation schemes such as 90° rotations and vertical/horizontal flipping are performed. ... the number of unrolled iterations chosen for the first and second layers are 12 and 5 respectively."
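As a reading aid, the quoted ICVL procedure (center crop, multiscale 64 × 64 patches, flips and rotations) might be sketched as follows. The scale factors and strides come from the quote; the naive subsampling used for the 1:2 and 1:4 scales is an assumption, as the paper does not state its resizing method.

```python
import numpy as np

def extract_multiscale_patches(img, patch=64, scales=(1, 2, 4), strides=(64, 32, 32)):
    """Center-crop to 1024 x 1024, then extract 64 x 64 patches at scales
    1:1, 1:2 and 1:4 with strides 64, 32 and 32 (per the paper's quote).
    img: (H, W, bands) array. Downsampling here is naive subsampling,
    an assumption; the authors may use a different resizing scheme.
    """
    h, w, _ = img.shape
    top, left = (h - 1024) // 2, (w - 1024) // 2
    img = img[top:top + 1024, left:left + 1024]

    patches = []
    for scale, stride in zip(scales, strides):
        scaled = img[::scale, ::scale]  # naive 1:scale downsampling
        hs, ws, _ = scaled.shape
        for i in range(0, hs - patch + 1, stride):
            for j in range(0, ws - patch + 1, stride):
                patches.append(scaled[i:i + patch, j:j + patch])
    return np.stack(patches)

def augment(p, rng):
    """Basic augmentation from the quote: 90-degree rotations and flips."""
    p = np.rot90(p, k=rng.integers(4), axes=(0, 1))
    if rng.integers(2):
        p = p[::-1]        # vertical flip
    if rng.integers(2):
        p = p[:, ::-1]     # horizontal flip
    return p

# Usage sketch: patches = extract_multiscale_patches(image)
#               p_aug = augment(patches[0], np.random.default_rng(0))
```

For a 1392 × 1300 ICVL image, this yields 64 × 64 × 31 patches at full, half, and quarter resolution, which matches the multiscale extraction the quote describes.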