Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks
Authors: Jesse Hagenaars, Federico Paredes-Vallés, Guido de Croon
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments with various types of recurrent ANNs and SNNs using the proposed pipeline. We validate our proposals through extensive quantitative and qualitative evaluations on multiple datasets. |
| Researcher Affiliation | Academia | Micro Air Vehicle Laboratory Delft University of Technology, The Netherlands |
| Pseudocode | No | The paper describes network architectures and processes but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The project's code and additional material can be found at https://mavlab.tudelft.nl/event_flow/. |
| Open Datasets | Yes | We train our networks on the indoor forward-facing sequences from the UZH-FPV Drone Racing Dataset [12], which is characterized by a much wider distribution of optical flow vectors than the datasets that we use for evaluation, i.e., MVSEC [54], High Quality Frames (HQF) [44], and the Event-Camera Dataset (ECD) [31]. |
| Dataset Splits | No | The paper does not define training/validation/test splits with percentages or counts. It trains and evaluates on different datasets, but no dedicated validation split from the training data is specified. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running experiments. |
| Software Dependencies | No | Our framework is implemented in PyTorch. The paper mentions PyTorch but does not provide version numbers for its software dependencies. |
| Experiment Setup | Yes | We use the Adam optimizer [24] and a learning rate of 0.0002, and train with a batch size of 8 for 100 epochs. We clip gradients based on a global norm of 100. We fix the number of events for each input partition to N = 1k, while we use 10k events for each training event partition. Lastly, we empirically set the scaling weight for Lsmooth to λ = 0.001. |
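One setup detail worth unpacking is gradient clipping by a global norm of 100: all gradients are rescaled jointly so their combined L2 norm never exceeds the threshold. The sketch below illustrates this in plain Python; the function name and the list-of-lists gradient representation are illustrative, not from the paper (in PyTorch this would be `torch.nn.utils.clip_grad_norm_`).

```python
import math

def clip_by_global_norm(grads, max_norm=100.0):
    """Scale a collection of gradient vectors so their joint L2 norm
    does not exceed max_norm; gradients below the threshold pass through."""
    global_norm = math.sqrt(sum(g * g for vec in grads for g in vec))
    if global_norm <= max_norm:
        return grads
    scale = max_norm / global_norm
    return [[g * scale for g in vec] for vec in grads]

# Example: a gradient with global norm 500 is rescaled to norm 100.
grads = [[300.0, 400.0]]  # sqrt(300^2 + 400^2) = 500
clipped = clip_by_global_norm(grads, max_norm=100.0)
```

Clipping on the global norm (rather than per-parameter) preserves the direction of the overall update while bounding its magnitude, which helps stabilize training of recurrent networks such as those used here.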