Toward Dynamic Non-Line-of-Sight Imaging with Mamba Enforced Temporal Consistency
Authors: Yue Li, Yi Sun, Shida Sun, Juntian Ye, Yueyi Zhang, Feihu Xu, Zhiwei Xiong
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments showcase the superior performance of our method on both synthetic data and real-world data captured by different imaging setups. |
| Researcher Affiliation | Academia | University of Science and Technology of China {yueli65,sunyi2017,jt141884,sdsun}@mail.ustc.edu.cn {zhyuey,feihuxu,zwxiong}@ustc.edu.cn |
| Pseudocode | No | The paper describes the method procedurally and with architectural diagrams (e.g., Figure 3, Figure 4) but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and data are available at https://github.com/Depth2World/Dynamic_NLOS. |
| Open Datasets | Yes | The code and data are available at https://github.com/Depth2World/Dynamic_NLOS. The dataset is publicly available to propel research in dynamic imaging within this field. |
| Dataset Splits | No | We utilize 150 sequences for training and 17 sequences for synthetic testing. Besides, we utilize 4 sequences for real-world evaluation. The paper specifies training and testing sets but does not explicitly mention a separate validation set split. |
| Hardware Specification | Yes | All the experiments are conducted on the NVIDIA A100 GPUs, with a batch size of 4. |
| Software Dependencies | No | Our method is implemented using PyTorch, trained on the synthetic data, and then directly tested on the real-world data. The paper mentions PyTorch but does not specify a version number or list other software dependencies with version numbers. |
| Experiment Setup | Yes | During training, we employ AdamW [47] as the optimizer with a learning rate of 10^-4 and a weight decay of 0.95. ... All the experiments are conducted on the NVIDIA A100 GPUs, with a batch size of 4. ... The hyper-parameters β and γ are set to 1 and 10^-5, respectively. α1, α2, α3 are set to 0.5, 1, and 0.1, respectively. |
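The optimizer reported in the Experiment Setup row can be illustrated with a minimal, self-contained sketch of one AdamW update step using the paper's stated learning rate (10^-4) and weight decay (0.95); the toy parameter and gradient values, and the default Adam moment constants, are illustrative assumptions, not taken from the paper.

```python
def adamw_step(theta, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.95):
    """One AdamW update on a scalar parameter: Adam moment estimates
    plus decoupled weight decay (Loshchilov & Hutter)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Decoupled weight decay is applied directly to the parameter,
    # not folded into the gradient.
    theta = theta - lr * (m_hat / (v_hat ** 0.5 + eps) + weight_decay * theta)
    return theta, m, v

# Toy usage: a single step from theta = 1.0 with gradient 0.5.
theta, m, v = adamw_step(1.0, 0.5, 0.0, 0.0, t=1)
```

In a PyTorch training script these values would correspond to `torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.95)`, matching the paper's reported configuration.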