Autonomous Driving with Spiking Neural Networks

Authors: Rui-Jie Zhu, Ziqing Wang, Leilani Gilpin, Jason Eshraghian

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluated on the nuScenes dataset, SAD achieves competitive performance in perception, prediction, and planning tasks, while drawing upon the energy efficiency of SNNs.
Researcher Affiliation | Academia | Rui-Jie Zhu (1), Ziqing Wang (2), Leilani Gilpin (1), Jason K. Eshraghian (1); (1) University of California, Santa Cruz, USA; (2) Northwestern University, USA
Pseudocode | No | The paper includes mathematical equations describing operations within the encoder (Eqs. 10-12 in Appendix A) and the LIF model (Eqs. 1-3 in Section 3.1), but it does not contain any clearly labeled 'Pseudocode' or 'Algorithm' blocks or sections. (A generic LIF update sketch is given after the table.)
Open Source Code | Yes | Our code is available at https://github.com/ridgerchu/SAD.
Open Datasets | Yes | We evaluate the proposed model using the nuScenes dataset [78]. The nuScenes dataset [78] is a comprehensive, publicly available dataset tailored for autonomous driving research.
Dataset Splits | Yes | We evaluate the proposed model using the nuScenes dataset [78] with 20 epochs on 4 NVIDIA A100 80GB GPUs, as is done in ST-P3 [70]. For our experiments, we consider 1.0s of historical context and predict 2.0s into the future, which corresponds to processing 3 past frames and predicting 4 future frames.
Hardware Specification | Yes | We evaluate the proposed model using the nuScenes dataset [78] with 20 epochs on 4 NVIDIA A100 80GB GPUs.
Software Dependencies | No | The paper mentions the use of optimizers like 'Lamb' and 'Adam' but does not provide specific version numbers for software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used in the implementation.
Experiment Setup | Yes | We evaluate the proposed model using the nuScenes dataset [78] with 20 epochs on 4 NVIDIA A100 80GB GPUs. Pre-train the STM on ImageNet-1K for 300 epochs... The input size is set to 224×224, and the batch size is set to 128 or 256 during the 310 training epochs, with a cosine-decay learning rate whose initial value is 0.0005. The optimizer used is Lamb. (A hedged sketch of this pre-training recipe is given after the table.)
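
For the 'Pseudocode' row: the LIF dynamics referenced there (Eqs. 1-3 of the paper) follow the standard discrete-time leaky integrate-and-fire update. The sketch below is a generic illustration of that update in PyTorch; the decay factor, threshold value, and hard-reset rule are illustrative defaults, not necessarily the paper's exact formulation.

```python
"""Generic discrete-time leaky integrate-and-fire (LIF) update.

Illustrative sketch only: beta, the threshold, and the hard reset are
common defaults, not values taken from the paper.
"""
import torch

def lif_step(x_t, u_prev, beta=0.9, threshold=1.0):
    """One LIF time step: leaky integration, spike generation, hard reset."""
    u = beta * u_prev + x_t               # leak previous potential, integrate input current
    spike = (u >= threshold).float()      # Heaviside spike when potential crosses threshold
    u = u * (1.0 - spike)                 # hard reset of neurons that fired
    return spike, u

# Usage: drive 8 neurons with random input current for 4 time steps.
u = torch.zeros(8)
for t in range(4):
    s, u = lif_step(torch.rand(8), u)
    print(t, s.sum().item())
```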
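
For the 'Experiment Setup' row: the quoted pre-training recipe (224×224 inputs, batch size 128 or 256, cosine-decay learning rate starting at 0.0005, Lamb optimizer) can be sketched as below. Only those hyperparameters come from the paper excerpt; the placeholder backbone, the random stand-in data, and the Lamb import from the timm library are assumptions for illustration, not the authors' implementation.

```python
"""Minimal sketch of the ImageNet-1K pre-training recipe quoted above.

Assumptions: timm provides the Lamb optimizer; the tiny linear model and
random batches below are placeholders for the STM backbone and ImageNet data.
"""
import torch
import torch.nn as nn
from timm.optim import Lamb  # assumption: Lamb as implemented in timm

epochs, batch_size, lr = 300, 128, 5e-4            # values from the quoted excerpt

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1000))  # placeholder backbone
optimizer = Lamb(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
criterion = nn.CrossEntropyLoss()

for epoch in range(epochs):
    images = torch.randn(batch_size, 3, 224, 224)  # one dummy 224x224 batch per epoch
    labels = torch.randint(0, 1000, (batch_size,))
    loss = criterion(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                               # cosine decay of the learning rate
```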