Learning to Detect Objects with a 1 Megapixel Event Camera

Authors: Etienne Perot, Pierre de Tournemire, Davide Nitti, Jonathan Masci, Amos Sironi

NeurIPS 2020

Reproducibility assessment: each variable below lists the result, followed by the supporting LLM response.
Research Type: Experimental
"In this paper, we address all these problems in the context of an event-based object detection task. First, we publicly release the first high-resolution large-scale dataset for object detection. ... Second, we introduce a novel recurrent architecture for event-based detection and a temporal consistency loss for better-behaved training. ... Experiments on the dataset introduced in this work, for which events and gray level images are available, show performance on par with that of highly tuned and studied frame-based detectors. ... In this section, we first evaluate the importance of the main components of our method in an ablation study. Then, we compare it against state-of-the-art detectors. We consider the COCO metrics [46] and we report COCO mAP, as it is widely used for evaluating detection algorithms."
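Since the paper reports COCO mAP, it may help to recall what that metric averages over: average precision computed at intersection-over-union (IoU) thresholds from 0.50 to 0.95 in steps of 0.05. A minimal IoU sketch (the function name and box convention are illustrative, not from the paper or its toolbox):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero so non-overlapping boxes get zero intersection.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# COCO mAP averages AP over these ten IoU thresholds: 0.50, 0.55, ..., 0.95.
COCO_IOU_THRESHOLDS = [0.50 + 0.05 * i for i in range(10)]
```

For example, two unit-overlap 2x2 boxes, `(0, 0, 2, 2)` and `(1, 1, 3, 3)`, intersect in area 1 over a union of 7, giving IoU 1/7.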
Researcher Affiliation: Industry
Etienne Perot, PROPHESEE, Paris (eperot@prophesee.ai); Pierre de Tournemire, PROPHESEE, Paris (pdetournemire@prophesee.ai); Davide Nitti, PROPHESEE, Paris (dnitti@prophesee.ai); Jonathan Masci, NNAISENSE, Lugano (jonathan@nnaisense.com); Amos Sironi, PROPHESEE, Paris (asironi@prophesee.ai)
Pseudocode: No
The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code: Yes
Evaluation code is available at github.com/prophesee-ai/prophesee-automotive-dataset-toolbox
Open Datasets: Yes
"First, we publicly release the first high-resolution large-scale dataset for object detection. ... The dataset we release contains more than 14 hours of driving recording, acquired in a large variety of scenarios." Dataset available at: prophesee.ai/category/dataset/
Dataset Splits: Yes
"At the end of the recording campaign, a total of 14.65 hours was obtained. We split them in 11.19 hours for training, 2.21 hours for validation, and 2.25 hours for testing."
Hardware Specification: Yes
"We also report the number of parameters of the networks and the methods' runtime, including both events preprocessing and detector inference, on an i7 CPU at 2.70GHz and a GTX980 GPU."
Software Dependencies: No
The paper mentions several software components, such as ADAM [47], RetinaNet [42], ResNet50 [54], SSD [37], and the COCO metrics [46], but it does not specify version numbers for any software package or programming language.
Experiment Setup: Yes
"All networks are trained for 20 epochs using ADAM [47] and learning rate 0.0002 with exponential decay of 0.98 every epoch."
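The quoted schedule (base rate 0.0002, multiplied by 0.98 after each epoch) can be sketched in a few lines; the helper name below is hypothetical, and this is only a reading of the reported hyperparameters, not the authors' training code:

```python
def lr_at_epoch(epoch, base_lr=2e-4, decay=0.98):
    """Learning rate after `epoch` full epochs of per-epoch exponential decay.

    Matches the reported setup: ADAM with lr 0.0002 and a 0.98
    multiplicative decay applied once per epoch.
    """
    return base_lr * decay ** epoch

# After the 20 training epochs, the rate has decayed from 2e-4
# to roughly two thirds of its initial value (about 1.3e-4).
final_lr = lr_at_epoch(20)
```

In PyTorch terms this corresponds to pairing the Adam optimizer with `torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.98)` stepped once per epoch, though the paper does not state which framework was used.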