Learning Optical Flow from Continuous Spike Streams
Authors: Rui Zhao, Ruiqin Xiong, Jing Zhao, Zhaofei Yu, Xiaopeng Fan, Tiejun Huang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our approach achieves state-of-the-art performance on existing synthetic datasets and real data captured by spike cameras. The source code and dataset are available at https://github.com/ruizhao26/Spike2Flow. |
| Researcher Affiliation | Academia | Rui Zhao (1,2), Ruiqin Xiong (1,2), Jing Zhao (3), Zhaofei Yu (1,2,4), Xiaopeng Fan (5), Tiejun Huang (1,2). (1) National Engineering Research Center of Visual Technology (NERCVT), Peking University; (2) Institute of Digital Media, School of Computer Science, Peking University; (3) National Computer Network Emergency Response Technical Team; (4) Institute for Artificial Intelligence, Peking University; (5) School of Computer Science and Technology, Harbin Institute of Technology |
| Pseudocode | No | The paper describes its approach and architecture with text and diagrams (e.g., Figure 1, Figure 4), but it does not include any explicit pseudocode blocks or algorithms labeled as such. |
| Open Source Code | Yes | The source code and dataset are available at https://github.com/ruizhao26/Spike2Flow. |
| Open Datasets | Yes | To train and evaluate the network in real scenes, based on scenes in Slow Flow [25], we generate flow fields and spike streams to construct a dataset, i.e., real scenes with spike and flow (RSSF). We use the raw data of Slow Flow to generate RSSF. The source code and dataset are available at https://github.com/ruizhao26/Spike2Flow. |
| Dataset Splits | No | The paper states, 'We select 11 scenes to generate the testing set and the other 30 scenes to generate the training set' for the RSSF dataset. While it specifies training and testing sets, it does not explicitly mention a separate validation split or how it was derived, only that the model was 'trained on the training set'. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models. The author checklist explicitly states 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]' |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer [30]' but does not specify any software names with version numbers for libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages (e.g., Python version) that would enable reproducible setup. |
| Experiment Setup | Yes | In the experiments, we set N as 3 and T as 20, which means we jointly estimate optical flow under 20, 40, and 60 time steps difference. We set the number of input spike frames as 21. For constructing correlation, we set the multi-scale level as 3, and we set the looking-up radius r = 3. We randomly crop the spike stream to 320 × 448 spatially during the training procedure and set the batch size as 6. We use Adam optimizer [30] with β1 = 0.9 and β2 = 0.999. The learning rate is initially set as 3e-4 and scaled by 0.7 every 10 epochs. The model is trained for 100 epochs. (Illustrative code sketches of this configuration follow the table.) |
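
The training recipe reported in the Experiment Setup row maps directly onto standard PyTorch components. Below is a minimal sketch assuming PyTorch: the network, data, and loss are hypothetical placeholders (the real Spike2Flow model and RSSF data live in the authors' repository), and only the quoted hyperparameters (Adam with β1 = 0.9, β2 = 0.999, learning rate 3e-4 scaled by 0.7 every 10 epochs, batch size 6, 320 × 448 crops, 21 input spike frames, 100 epochs) come from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network standing in for Spike2Flow (hypothetical; the real
# architecture is in the authors' repository): 21 spike frames in,
# 2-channel optical flow out.
model = nn.Conv2d(21, 2, kernel_size=3, padding=1)

# Optimizer and schedule as reported in the paper: Adam(β1=0.9, β2=0.999),
# lr = 3e-4, multiplied by 0.7 every 10 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.7)

# Synthetic tensors mimicking the reported shapes (batch size 6, 21 binary
# spike frames, 320x448 crops); these stand in for the RSSF training data.
spikes = torch.randint(0, 2, (6, 21, 320, 448)).float()
flow_gt = torch.randn(6, 2, 320, 448)
loader = DataLoader(TensorDataset(spikes, flow_gt), batch_size=6)

for epoch in range(100):  # 100 epochs total, as reported
    for spk, gt in loader:
        optimizer.zero_grad()
        pred = model(spk)
        loss = (pred - gt).abs().mean()  # placeholder loss, not the paper's objective
        loss.backward()
        optimizer.step()
    scheduler.step()  # decays lr by 0.7 at every 10-epoch boundary
```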
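For the correlation settings (multi-scale level 3, looking-up radius r = 3), the paper gives numbers but no code, so the sketch below is an assumption: it shows one common RAFT-style way to build a 3-level correlation pyramid and sample a (2r+1) × (2r+1) window at each level. Function names, tensor shapes, and the √C normalization are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def corr_pyramid(f1, f2, levels=3):
    # All-pairs correlation between feature maps f1, f2 of shape (B, C, H, W),
    # pooled into `levels` resolutions (level 3 matches the paper's setting).
    B, C, H, W = f1.shape
    corr = torch.einsum('bchw,bcuv->bhwuv', f1, f2) / C**0.5
    corr = corr.reshape(B * H * W, 1, H, W)
    pyramid = [corr]
    for _ in range(levels - 1):
        corr = F.avg_pool2d(corr, 2)  # coarser matching scale
        pyramid.append(corr)
    return pyramid

def lookup(pyramid, coords, r=3):
    # Sample a (2r+1)^2 window around `coords` (B, 2, H, W; x then y)
    # from every pyramid level; r = 3 matches the paper's radius.
    B, _, H, W = coords.shape
    dx = torch.linspace(-r, r, 2 * r + 1)
    delta = torch.stack(torch.meshgrid(dx, dx, indexing='ij'), dim=-1)
    out = []
    for i, corr in enumerate(pyramid):
        c = coords.permute(0, 2, 3, 1).reshape(B * H * W, 1, 1, 2) / 2**i
        grid = c + delta.view(1, 2 * r + 1, 2 * r + 1, 2)
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        h, w = corr.shape[-2:]
        gx = 2 * grid[..., 0] / max(w - 1, 1) - 1
        gy = 2 * grid[..., 1] / max(h - 1, 1) - 1
        sampled = F.grid_sample(corr, torch.stack([gx, gy], dim=-1),
                                align_corners=True)
        out.append(sampled.view(B, H, W, -1).permute(0, 3, 1, 2))
    return torch.cat(out, dim=1)  # (B, levels * (2r+1)^2, H, W)

# Usage sketch on toy features: identity coordinates (no flow yet).
B, C, H, W = 1, 16, 8, 8
f1, f2 = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
ys, xs = torch.meshgrid(torch.arange(H).float(), torch.arange(W).float(),
                        indexing='ij')
coords = torch.stack([xs, ys]).unsqueeze(0)        # (1, 2, H, W)
feat = lookup(corr_pyramid(f1, f2), coords, r=3)
print(feat.shape)  # torch.Size([1, 147, 8, 8]) -> 3 levels * 7*7 window
```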