A Dynamic GCN with Cross-Representation Distillation for Event-Based Learning
Authors: Yongjian Deng, Hao Chen, Youfu Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show our model and learning framework are effective and generalize well across multiple vision tasks. |
| Researcher Affiliation | Academia | Yongjian Deng (1,4), Hao Chen (2, corresponding), Youfu Li (3). (1) College of Computer Science, Beijing University of Technology; (2) School of Computer Science and Engineering, Southeast University; (3) Department of Mechanical Engineering, City University of Hong Kong; (4) Engineering Research Center of Intelligence Perception and Autonomous Control, Ministry of Education, Beijing, China |
| Pseudocode | No | The paper describes algorithmic steps and refers to figures illustrating the method, but does not include a formal pseudocode block or an algorithm listing. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We select three challenging datasets commonly used for evaluating event-based object classification, i.e., N-Cal (Orchard et al. 2015), N-C (Sironi et al. 2018), and CIF10 (Li et al. 2017) (Tab. 1). We choose the action recognition task to validate the advantages of our model in encoding motions using the DVS128 (Amir et al. 2017) dataset. |
| Dataset Splits | No | The paper mentions "test data" and describes training procedures and parameters, but does not provide explicit training/validation/test dataset splits (e.g., percentages or sample counts for each partition) or a detailed methodology for creating such splits. |
| Hardware Specification | Yes | We measure inference time on the N-C using PyTorch on an Nvidia RTX 3090 and an Intel i7-13700. |
| Software Dependencies | No | The paper mentions 'PyTorch' as software used for inference, but does not provide its specific version number or other software dependencies with version details. |
| Experiment Setup | Yes | We train them using the Adam optimizer with batch size 32 and an initial learning rate (lr) of 1e-4, which is reduced by a factor of 2 after 20 epochs. For the EDGCN, ... We use SGD optimizer with an initial lr of 1e-1 for object classification and action recognition, and reduce the lr until 1e-4 using cosine annealing. We choose Adam optimizer with batch size 32 for detection, and reduce lr starting from 1e-2 by a factor of 2 after 20 epochs. |
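The optimizer and learning-rate schedules quoted in the Experiment Setup row map directly onto standard PyTorch schedulers. The following is a minimal sketch of that configuration; the `nn.Linear` stand-in model and the 100-epoch total are assumptions (the paper's EDGCN architecture and epoch count are not given in the excerpt).

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR, CosineAnnealingLR

# Hypothetical stand-in for the model; EDGCN itself is not released.
model = nn.Linear(8, 2)

# Comparison models: Adam, batch size 32, initial lr 1e-4,
# halved ("reduced by a factor of 2") every 20 epochs.
adam = optim.Adam(model.parameters(), lr=1e-4)
adam_sched = StepLR(adam, step_size=20, gamma=0.5)

# EDGCN, object classification / action recognition: SGD from lr 1e-1,
# cosine-annealed down to 1e-4. Total epoch count is an assumption.
epochs = 100
sgd = optim.SGD(model.parameters(), lr=1e-1)
sgd_sched = CosineAnnealingLR(sgd, T_max=epochs, eta_min=1e-4)

# Detection would use Adam with lr 1e-2 and the same halve-every-20-epochs
# schedule, i.e. StepLR(optim.Adam(..., lr=1e-2), step_size=20, gamma=0.5).

for epoch in range(epochs):
    # ... one training epoch over batches of size 32 would run here ...
    adam_sched.step()
    sgd_sched.step()

print(adam.param_groups[0]["lr"])  # 1e-4 halved five times over 100 epochs
print(sgd.param_groups[0]["lr"])   # annealed to the 1e-4 floor
```

This sketch only reconstructs the hyperparameters the report quotes; without released code, details such as weight decay, warmup, or gradient clipping remain unknown.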