Spatial-Spectral Transformer for Hyperspectral Image Denoising
Authors: Miaoyu Li, Ying Fu, Yulun Zhang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our proposed method outperforms the state-of-the-art HSI denoising methods in quantitative quality and visual results. |
| Researcher Affiliation | Academia | Miaoyu Li¹, Ying Fu¹*, Yulun Zhang² — ¹Beijing Institute of Technology, ²ETH Zürich |
| Pseudocode | No | No structured pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | Yes | The code is released at https://github.com/MyuLi/SST. |
| Open Datasets | Yes | We evaluate our method mainly on ICVL (Arad and Ben-Shahar 2016) dataset. |
| Dataset Splits | No | The paper only specifies a training and testing split (100 HSIs for training and 50 HSIs for testing) but does not explicitly mention a separate validation set split or cross-validation details for reproducibility. |
| Hardware Specification | Yes | Competing deep learning methods (HSID-CNN, QRNN3D, and T3SC) and our proposed Transformer are implemented with PyTorch and run on a GeForce RTX 3090. Traditional methods, including BM4D, LLRT, TSLRLN, and NG-Meet, are implemented in Matlab and run on an Intel Core i9-10850K CPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'Matlab' as implementation environments but does not provide specific version numbers for these or any other software libraries or dependencies. |
| Experiment Setup | Yes | We use Adam (Kingma and Ba 2014) to optimize the network with parameters initialized by Xavier initialization (Glorot and Bengio 2010). The batch size is set to 8 with 100 epochs of training. The learning rate is set to 1×10⁻⁴ and is divided by 10 after 60 epochs. |
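The step learning-rate schedule quoted above (1×10⁻⁴, divided by 10 after 60 of 100 epochs) can be sketched as a small helper. This is an illustrative reconstruction, not code from the paper's repository; the function name and the choice of "after 60 epochs" meaning "from epoch 60 onward" are our assumptions.

```python
def step_lr(epoch, base_lr=1e-4, drop_epoch=60, factor=10.0):
    """Illustrative step schedule for the paper's stated setup:
    base_lr for the first `drop_epoch` epochs, then base_lr / factor.
    (Name and boundary convention are assumptions, not from the paper.)"""
    return base_lr if epoch < drop_epoch else base_lr / factor

# Over 100 epochs: epochs 0-59 train at 1e-4, epochs 60-99 at 1e-5.
schedule = [step_lr(e) for e in range(100)]
```

In PyTorch this would typically be expressed with `torch.optim.lr_scheduler.StepLR(optimizer, step_size=60, gamma=0.1)` wrapped around the Adam optimizer.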