EventRPG: Event Data Augmentation with Relevance Propagation Guidance
Authors: Mingyuan Sun, Donghao Zhang, Zongyuan Ge, Jiaxu Wang, Jia Li, Zheng Fang, Renjing Xu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our proposed method has been evaluated on several SNN structures, achieving state-of-the-art performance on object recognition tasks including N-Caltech101 and CIFAR10-DVS, with accuracies of 85.62% and 85.55% respectively, as well as on the SL-Animals action recognition task with an accuracy of 91.59%. |
| Researcher Affiliation | Collaboration | The Hong Kong University of Science and Technology (Guangzhou); Northeastern University; Seeing Machines; Monash University; Peking University |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Our code is available at https://github.com/myuansun/EventRPG. |
| Open Datasets | Yes | N-Caltech101 The neuromorphic version (Orchard et al., 2015) of Caltech101 (Fei-Fei et al., 2004)... CIFAR10-DVS The DVS version (Li et al., 2017) of CIFAR10 (Krizhevsky et al., 2009)... DVS128 Gesture A hand gesture dataset (Amir et al., 2017)... SL-Animals-DVS A sign language dataset (Vasudevan et al., 2021)... |
| Dataset Splits | No | The paper specifies training and test splits for datasets (e.g., N-Caltech101 split into 9:1 training and test set), but does not explicitly mention or specify a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Spikingjelly (Fang et al., 2020)' and 'TET (Deng et al., 2022)' to build models, but these are cited as papers rather than named software packages with version numbers, as required for reproducibility. |
| Experiment Setup | Yes | Other hyper-parameters are shown in Table 7, which details the neural network, neuron model, dataset, epochs, batch size, timesteps, and learning rate, e.g., Spike-VGG11, LIF neuron, N-Caltech101, 100 epochs, batch size 16, 10 timesteps, learning rate 1×10⁻³ (a hedged configuration sketch based on these values follows the table). |
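
The sketch below is not the authors' released code; it only illustrates how the Table 7 hyper-parameters quoted above (Spike-VGG11, LIF neuron, N-Caltech101, 100 epochs, batch size 16, 10 timesteps, learning rate 1×10⁻³) could be wired into a SpikingJelly training step. The small convolutional stand-in network, the 48×48 input size, and the mean-over-timesteps readout are illustrative assumptions; the paper's actual Spike-VGG11 architecture and TET-based training loss are not reproduced here.

```python
# Hypothetical sketch, assuming a SpikingJelly release with the
# activation_based API; it is NOT the EventRPG implementation.
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, functional, surrogate

# Hyper-parameters as reported in Table 7 for the N-Caltech101 run.
CONFIG = {
    "network": "Spike-VGG11",   # the placeholder below is much smaller
    "neuron": "LIF",
    "dataset": "N-Caltech101",
    "epochs": 100,
    "batch_size": 16,
    "timesteps": 10,
    "lr": 1e-3,
}

# Tiny stand-in for Spike-VGG11: one conv block with LIF spiking neurons.
net = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 polarity channels of event frames
    neuron.LIFNode(surrogate_function=surrogate.ATan()),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 101),                          # N-Caltech101 has 101 classes
)
optimizer = torch.optim.Adam(net.parameters(), lr=CONFIG["lr"])
criterion = nn.CrossEntropyLoss()

# One illustrative step on random data shaped [T, B, C, H, W].
frames = torch.rand(CONFIG["timesteps"], CONFIG["batch_size"], 2, 48, 48)
labels = torch.randint(0, 101, (CONFIG["batch_size"],))

optimizer.zero_grad()
# Run the network once per timestep and average the logits over time.
out = torch.stack([net(frames[t]) for t in range(CONFIG["timesteps"])]).mean(0)
loss = criterion(out, labels)
loss.backward()
optimizer.step()
functional.reset_net(net)  # reset membrane potentials before the next sample
```

A full reproduction would replace the placeholder network with Spike-VGG11, load N-Caltech101 event frames instead of random tensors, and repeat this step for the reported 100 epochs.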