Binarized Spectral Compressive Imaging
Authors: Yuanhao Cai, Yuxin Zheng, Jing Lin, Xin Yuan, Yulun Zhang, Haoqian Wang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 (Experiment): "Comprehensive quantitative and qualitative experiments manifest that our proposed BiSRNet outperforms state-of-the-art binarization algorithms." |
| Researcher Affiliation | Academia | Yuanhao Cai¹, Yuxin Zheng¹, Jing Lin¹, Xin Yuan², Yulun Zhang³, Haoqian Wang¹ (¹Tsinghua University, ²Westlake University, ³ETH Zürich) |
| Pseudocode | No | The paper provides architectural diagrams (e.g., Fig. 2) but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and models are publicly available at https://github.com/caiyuanhao1998/BiSCI |
| Open Datasets | Yes | Two simulation datasets, CAVE [66] and KAIST [65], are adopted. The CAVE dataset provides 32 HSIs with a spatial size of 512×512. The KAIST dataset includes 30 HSIs at a spatial size of 2704×3376. We use CAVE for training and select 10 scenes from KAIST for testing. |
| Dataset Splits | No | The paper mentions using CAVE for training and KAIST for testing, and describes how training samples are generated (patches, data augmentation). However, it does not explicitly state a separate validation set or provide details on validation splits. |
| Hardware Specification | Yes | We train BiSRNet for 300 epochs on a single RTX 2080 GPU. |
| Software Dependencies | No | The paper states, 'The proposed BiSRNet is implemented by PyTorch [67]', but it does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We use Adam [68] optimizer (β1 = 0.9 and β2 = 0.999) and Cosine Annealing [69] scheduler to train BiSRNet for 300 epochs on a single RTX 2080 GPU. Training samples are patches with spatial sizes of 256×256 and 96×96 randomly cropped from 28-channel 3D HSI data cubes for simulation and real experiments, respectively. The shifting step d is 2. The batch size is 2. We set the basic channel C = Nλ = 28 to store HSI information. We use random flipping and rotation for data augmentation. The training loss function is the root mean square error (RMSE) between reconstructed and ground-truth HSIs. (See the sketches following this table.) |
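
For context on the shifting step d = 2 quoted above: in CASSI-style spectral compressive imaging, each of the 28 spectral channels is modulated by a coded-aperture mask and shifted by d pixels along one spatial axis before being summed into a single 2D measurement. The sketch below illustrates this standard shift-and-sum forward model; the function name and tensor layout are illustrative assumptions, and the authors' actual simulation code lives in the BiSCI repository.

```python
import torch

def cassi_forward(x, mask, d=2):
    """Sketch of the standard single-disperser CASSI measurement model.

    x:    (N_lambda, H, W) HSI data cube, here N_lambda = 28
    mask: (H, W) coded-aperture mask
    d:    dispersion shift step in pixels (the paper states d = 2)
    Returns a 2D measurement of shape (H, W + d * (N_lambda - 1)).
    """
    n_lambda, h, w = x.shape
    y = torch.zeros(h, w + d * (n_lambda - 1))
    for i in range(n_lambda):
        # Each spectral channel is modulated by the mask, then shifted
        # by d * i pixels along the width axis before summation.
        y[:, d * i : d * i + w] += x[i] * mask
    return y
```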
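The training recipe in the last row maps directly onto standard PyTorch components. The sketch below wires together Adam (β1 = 0.9, β2 = 0.999), cosine annealing over 300 epochs, the RMSE loss, and flip/rotation augmentation as described in the paper; the model is a placeholder convolution rather than the real BiSRNet, and the learning rate and data loader are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Placeholder for the authors' BiSRNet; the real model is in the BiSCI repo.
# C = N_lambda = 28 channels, matching the paper's setting.
model = nn.Conv2d(28, 28, kernel_size=3, padding=1)

# Adam with beta1 = 0.9, beta2 = 0.999 and cosine annealing over 300 epochs,
# as stated in the paper. The learning rate is an assumption, not from the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)

def rmse_loss(pred, target):
    """Root mean square error between reconstructed and ground-truth HSIs."""
    return torch.sqrt(torch.mean((pred - target) ** 2))

def augment(x):
    """Random flipping and rotation, as described in the paper."""
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-1])
    k = int(torch.randint(0, 4, (1,)))
    return torch.rot90(x, k, dims=[-2, -1])

# Dummy loader standing in for 256x256 CAVE patches at batch size 2.
train_loader = [(torch.rand(2, 28, 256, 256), torch.rand(2, 28, 256, 256))]

for epoch in range(300):
    for inputs, gt in train_loader:
        optimizer.zero_grad()
        loss = rmse_loss(model(augment(inputs)), augment(gt))
        loss.backward()
        optimizer.step()
    scheduler.step()
```

Note that in a real pipeline the same random flip/rotation must be applied jointly to the input and its ground truth (here the two `augment` calls draw independent randomness, which is only acceptable in this dummy example); the authors' repository handles this pairing in its data loader.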