Virtual Scanning: Unsupervised Non-line-of-sight Imaging from Irregularly Undersampled Transients
Authors: Xingyu Cui, Huanjing Yue, Song Li, Xiangjun Yin, Yusen Hou, Yun Meng, Kai Zou, Xiaolong Hu, Jingyu Yang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both simulated and real-world data validate the performance and generalization of our method. |
| Researcher Affiliation | Academia | (1) School of Electrical and Information Engineering, Tianjin University, China; (2) School of Precision Instrument and Optoelectronic Engineering, Tianjin University, China; (3) Key Lab. of Optoelectronic Information Science and Technology, Ministry of Education, China |
| Pseudocode | No | The paper does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and model are available at https://github.com/XingyuCuii/Virtual-Scanning-NLOS. |
| Open Datasets | Yes | We generated 8,000 transients using the transient rasterizer from [8] with default parameters. ... To assess our method's generalization, we tested it on real-world data acquired by three different systems [10–12] and a self-built system (see SM for system details). |
| Dataset Splits | No | The paper mentions training and testing data but does not explicitly specify a separate validation dataset split with percentages or sample counts. |
| Hardware Specification | Yes | We compare the inference time of various methods on an Intel(R) Xeon(R) Platinum 8369B 2.90GHz CPU with 32 cores and an NVIDIA 3090 GPU, respectively. ... All models were trained on 2 NVIDIA 3090 GPUs, taking nearly 40 hours in total. |
| Software Dependencies | No | Our method is implemented using PyTorch [47], and we employ the Adam optimizer [48] with a weight decay of 10⁻⁸. |
| Experiment Setup | Yes | In the first stage, the SURE-based denoiser model Fϕ is trained with a batch size of 4 for 40 epochs. We set the initial learning rate to 1×10⁻³ and reduce it by a factor of 0.1 at epoch 30. Subsequently, in the second stage, the VSRnet model Fθ is trained with a batch size of 2 for 20 epochs, utilizing an initial learning rate of 5×10⁻⁴ and a reduction by a factor of 0.1 at epoch 10. In each epoch, we randomly select 40 complete simulated transients for each relay surface and extract signals from them to generate irregularly undersampled transients for training. All models were trained on 2 NVIDIA 3090 GPUs, taking nearly 40 hours in total. Regarding the loss function, the hyperparameters ε, β, and b are set to 0.1, 0.001, and 4, respectively. |
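
For readers reconstructing this setup, the sketch below expresses the reported two-stage schedule in PyTorch. Only the optimizer type (Adam), the 10⁻⁸ weight decay, the learning rates, batch/epoch counts, and the decay milestones come from the paper; the `Conv3d` placeholders and the `make_stage` helper are illustrative assumptions standing in for the Fϕ and Fθ architectures, which are not reproduced here.

```python
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR

# Placeholder modules: the paper's SURE-based denoiser F_phi and VSRnet
# F_theta are stand-ins here; any 3D-conv stack would slot in.
F_phi = nn.Conv3d(1, 1, kernel_size=3, padding=1)
F_theta = nn.Conv3d(1, 1, kernel_size=3, padding=1)

def make_stage(model, lr, milestone):
    """Adam with the reported 1e-8 weight decay and a 0.1x LR drop."""
    opt = Adam(model.parameters(), lr=lr, weight_decay=1e-8)
    sched = MultiStepLR(opt, milestones=[milestone], gamma=0.1)
    return opt, sched

# Stage 1: denoiser F_phi, batch size 4, 40 epochs, lr 1e-3, drop at epoch 30.
opt1, sched1 = make_stage(F_phi, lr=1e-3, milestone=30)
# Stage 2: VSRnet F_theta, batch size 2, 20 epochs, lr 5e-4, drop at epoch 10.
opt2, sched2 = make_stage(F_theta, lr=5e-4, milestone=10)
```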