Deep Equilibrium Models for Snapshot Compressive Imaging
Authors: Yaping Zhao, Siming Zheng, Xin Yuan
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On a variety of datasets and real data, both quantitative and qualitative evaluations of our results demonstrate the effectiveness and stability of our proposed method. |
| Researcher Affiliation | Academia | Westlake University, Hangzhou, China; The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Computer Network Information Center, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China. Contact: zhaoyp@connect.hku.hk, zhengsiming@cnic.cn, xyuan@westlake.edu.cn |
| Pseudocode | No | The paper describes methods using mathematical equations and figures (e.g., Fig. 3, Fig. 4) but does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and models are available at: https://github.com/IndigoPurple/DEQSCI |
| Open Datasets | Yes | Following BIRNAT (Cheng et al. 2020), we choose the dataset DAVIS2017 (Pont-Tuset et al. 2017) for training. |
| Dataset Splits | No | DAVIS2017 has 90 scenes and in total 6,208 frames. We crop its video frames to video patch cubes with the spatial size of 256 × 256 × 8, and obtain 26,000 training samples with data augmentation. (A sketch of this patch-cube extraction appears after the table.) |
| Hardware Specification | No | The paper discusses memory requirements and processing time but does not provide specific details on the hardware (e.g., GPU/CPU models, RAM) used for experiments. |
| Software Dependencies | No | To quickly calculate the vector-Jacobian products in Eq. (22) and Eq. (23), auto-differentiation tools (e.g., the autograd package in PyTorch (Paszke et al. 2019)) can be utilized. (A hedged autograd sketch of such a backward pass appears after the table.) |
| Experiment Setup | Yes | Then we train the neural network for 30 epochs. The initial learning rate is 1 × 10⁻³ and is decayed by 10% every 10 epochs. During training, we utilize Anderson acceleration for both the forward and backward pass fixed-point iterations. (A sketch of Anderson acceleration appears after the table.) |
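
The patch-cube preparation quoted in the Dataset Splits row (cropping DAVIS2017 frames into 256 × 256 × 8 cubes) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name `crop_video_cubes`, the random-crop strategy, and the `crops_per_clip` parameter are hypothetical and are not taken from the authors' pipeline.

```python
import numpy as np

def crop_video_cubes(frames, size=256, n_frames=8, crops_per_clip=4, rng=None):
    """Hypothetical sketch: cut a video array of shape (T, H, W, C) into
    random patch cubes of shape (n_frames, size, size, C), loosely
    mirroring the 256 x 256 x 8 training cubes described in the paper."""
    rng = rng or np.random.default_rng()
    T, H, W, _ = frames.shape
    cubes = []
    for _ in range(crops_per_clip):
        # Random temporal and spatial offsets for one patch cube.
        t = rng.integers(0, T - n_frames + 1)
        y = rng.integers(0, H - size + 1)
        x = rng.integers(0, W - size + 1)
        cubes.append(frames[t:t + n_frames, y:y + size, x:x + size])
    return np.stack(cubes)  # (crops_per_clip, n_frames, size, size, C)
```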
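
Eq. (22) and Eq. (23) referenced in the Software Dependencies row are the implicit-differentiation equations of a deep equilibrium model, whose backward pass reduces to repeated vector-Jacobian products. The sketch below shows one common way to evaluate them with `torch.autograd.grad`; the equilibrium cell `f`, the names `z_star` and `grad_output`, and the fixed-point loop with its tolerance are illustrative assumptions, not the authors' implementation.

```python
import torch

def deq_backward_vjp(f, z_star, x, grad_output, n_iters=40, tol=1e-4):
    """Sketch of a DEQ backward pass: solve g = grad_output + J^T g,
    where J = df/dz at the forward fixed point z_star, using
    vector-Jacobian products from autograd."""
    z_star = z_star.detach().requires_grad_()
    f_z = f(z_star, x)  # one traced evaluation of the equilibrium cell

    g = grad_output.clone()
    for _ in range(n_iters):
        # Vector-Jacobian product J^T g evaluated at z_star.
        vjp = torch.autograd.grad(f_z, z_star, grad_outputs=g,
                                  retain_graph=True)[0]
        g_new = grad_output + vjp
        if (g_new - g).norm() / (g.norm() + 1e-9) < tol:
            g = g_new
            break
        g = g_new
    # Adjoint vector: backpropagating g through one more call of
    # f(z_star, x) yields the parameter and input gradients.
    return g
```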
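
The Experiment Setup row states that Anderson acceleration drives both the forward and backward fixed-point iterations. Below is a minimal batched PyTorch sketch of Anderson acceleration following the standard formulation (a history of the last m iterates combined via a small regularized least-squares solve); the hyperparameters `m`, `lam`, `beta`, `tol`, and `n_iters` are assumptions, not values reported in the paper.

```python
import torch

def anderson(f, x0, m=5, n_iters=50, lam=1e-4, tol=1e-4, beta=1.0):
    """Sketch of Anderson acceleration for the fixed point z = f(z).

    Keeps the last m iterates X and their images F = f(X), and at each
    step solves a small regularized least-squares problem for mixing
    weights alpha before forming the next iterate."""
    bsz = x0.shape[0]
    dim = x0[0].numel()
    X = torch.zeros(bsz, m, dim, dtype=x0.dtype, device=x0.device)
    F = torch.zeros(bsz, m, dim, dtype=x0.dtype, device=x0.device)
    X[:, 0], F[:, 0] = x0.reshape(bsz, -1), f(x0).reshape(bsz, -1)
    X[:, 1], F[:, 1] = F[:, 0], f(F[:, 0].reshape_as(x0)).reshape(bsz, -1)

    # Bordered linear system for the constrained least-squares weights.
    H = torch.zeros(bsz, m + 1, m + 1, dtype=x0.dtype, device=x0.device)
    H[:, 0, 1:] = H[:, 1:, 0] = 1
    y = torch.zeros(bsz, m + 1, 1, dtype=x0.dtype, device=x0.device)
    y[:, 0] = 1

    for k in range(2, n_iters):
        n = min(k, m)
        G = F[:, :n] - X[:, :n]  # residuals of the stored iterates
        H[:, 1:n + 1, 1:n + 1] = torch.bmm(G, G.transpose(1, 2)) + \
            lam * torch.eye(n, dtype=x0.dtype, device=x0.device)[None]
        alpha = torch.linalg.solve(H[:, :n + 1, :n + 1],
                                   y[:, :n + 1])[:, 1:n + 1, 0]
        # Mixed update, overwriting the oldest slot in the history.
        X[:, k % m] = beta * (alpha[:, None] @ F[:, :n])[:, 0] + \
            (1 - beta) * (alpha[:, None] @ X[:, :n])[:, 0]
        F[:, k % m] = f(X[:, k % m].reshape_as(x0)).reshape(bsz, -1)
        res = (F[:, k % m] - X[:, k % m]).norm() / (1e-5 + F[:, k % m].norm())
        if res < tol:
            break
    return X[:, k % m].reshape_as(x0)
```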