EventDrop: Data Augmentation for Event-based Learning

Authors: Fuqiang Gu, Weicong Sng, Xuke Hu, Fangwen Yu

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on two event datasets (N-Caltech101 and N-Cars) demonstrate that EventDrop can significantly improve the generalization performance across a variety of deep networks.
Researcher Affiliation | Academia | (1) College of Computer Science, Chongqing University, China; (2) School of Computing, National University of Singapore, Singapore; (3) Institute of Data Science, German Aerospace Center, Germany; (4) Department of Precision Instrument, Tsinghua University, China
Pseudocode | Yes | Algorithm 1: Procedures of augmenting event data with EventDrop. (A sketch of the procedure follows the table.)
Open Source Code | Yes | We have implemented EventDrop in PyTorch and the source code is available at https://github.com/fuqianggu/EventDrop.
Open Datasets | Yes | We evaluate the proposed EventDrop augmentation technique using two public event datasets: N-Caltech101 [Orchard et al., 2015] and N-Cars [Sironi et al., 2018].
Dataset Splits | Yes | We perform early stopping on a validation set, using the splits provided by EST [Gehrig et al., 2019] on N-Caltech101 and 20% of the training data on N-Cars. (A split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types).
Software Dependencies | No | The paper mentions implementing EventDrop in PyTorch, but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | The Adam optimizer is used to train the model by minimizing the cross-entropy loss. The initial learning rate is set to 1 × 10^-4 until iteration 100, after which the learning rate is reduced by a factor of 0.5 every 10 iterations. The total number of iterations is 200, and a batch size of 4 is used for both datasets. (A training-setup sketch follows the table.)
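
Since Algorithm 1 is only summarized above, the following is a minimal sketch of an EventDrop-style augmentation. It assumes events arrive as an (N, 4) NumPy array of (x, y, t, p) tuples; the four strategies (identity, drop by time, drop by area, random drop) follow the paper's description, but the drop-ratio ranges and function names here are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def event_drop(events, rng=np.random):
    """Sketch of an EventDrop-style augmentation (cf. Algorithm 1).

    `events` is assumed to be an (N, 4) array of (x, y, t, p) tuples.
    The drop-ratio ranges below are illustrative, not the paper's values.
    """
    option = rng.randint(4)              # pick one of four strategies uniformly
    if option == 0:                      # identity: keep all events
        return events
    if option == 1:                      # drop by time: remove events in a random time window
        ratio = rng.uniform(0.1, 0.5)
        t = events[:, 2]
        t0, t1 = t.min(), t.max()
        start = rng.uniform(t0, t1 - ratio * (t1 - t0))
        keep = (t < start) | (t > start + ratio * (t1 - t0))
        return events[keep]
    if option == 2:                      # drop by area: remove events inside a random spatial box
        ratio = rng.uniform(0.1, 0.3)
        x, y = events[:, 0], events[:, 1]
        w, h = x.max() - x.min(), y.max() - y.min()
        cx, cy = rng.uniform(x.min(), x.max()), rng.uniform(y.min(), y.max())
        inside = (np.abs(x - cx) < ratio * w / 2) & (np.abs(y - cy) < ratio * h / 2)
        return events[~inside]
    ratio = rng.uniform(0.1, 0.5)        # random drop: discard a random fraction of events
    keep = rng.rand(len(events)) > ratio
    return events[keep]
```

Because the augmentation only removes events, it can be applied on the fly to each training sample before the events are converted into the frame or grid representation consumed by the network.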
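The 20% hold-out used for early stopping on N-Cars corresponds to a standard random split of the training set. The sketch below shows one way to do this in PyTorch; the dataset object and tensor shapes are placeholders, not the actual N-Cars loader.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder standing in for the N-Cars training data (shapes are illustrative).
full_train = TensorDataset(torch.randn(1000, 2, 64, 64), torch.randint(0, 2, (1000,)))

# Hold out 20% of the training data as a validation set for early stopping.
n_val = int(0.2 * len(full_train))
train_set, val_set = random_split(
    full_train, [len(full_train) - n_val, n_val],
    generator=torch.Generator().manual_seed(0),  # fixed seed so the split is reproducible
)
```

For N-Caltech101 the predefined train/validation/test splits released with EST [Gehrig et al., 2019] are used instead of a random split.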
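The optimizer and learning-rate schedule quoted above translate roughly into the PyTorch sketch below. The model is a stand-in, and the exact iteration at which the first halving takes effect is one interpretation of the quoted description.

```python
import torch

model = torch.nn.Linear(10, 101)  # stand-in for the classification network (101 N-Caltech101 classes)

# Adam with the reported initial learning rate of 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def lr_factor(it):
    """Constant LR for the first 100 iterations, then halved every 10 iterations."""
    return 1.0 if it < 100 else 0.5 ** ((it - 100) // 10 + 1)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)
criterion = torch.nn.CrossEntropyLoss()

for it in range(200):                      # 200 iterations in total
    x = torch.randn(4, 10)                 # dummy batch of size 4
    y = torch.randint(0, 101, (4,))
    loss = criterion(model(x), y)          # cross-entropy loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                       # advance the LR schedule once per iteration
```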