Asynchronous Event Processing with Local-Shift Graph Convolutional Network
Authors: Linhui Sun, Yifan Zhang, Jian Cheng, Hanqing Lu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the computational cost can be reduced by nearly 9 times by using the local-shift operation and that the proposed asynchronous procedure further improves inference efficiency, while achieving state-of-the-art performance on gesture recognition and object recognition with significantly lower computational complexity than previous methods. |
| Researcher Affiliation | Academia | Linhui Sun (1,2,3), Yifan Zhang (1,2,3,*), Jian Cheng (1,2,3), Hanqing Lu (1,2). 1 Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; 2 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; 3 AIRIA, Nanjing 211135, China |
| Pseudocode | No | The paper does not contain any figures or sections explicitly labeled 'Pseudocode' or 'Algorithm', nor are there structured steps formatted like code. |
| Open Source Code | No | The paper does not contain any statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | Experiments are conducted on four commonly used event-based datasets, including the DVS128 Gesture Dataset (Amir et al. 2017), N-Cars (Sironi et al. 2018), MNIST-DVS (Orchard et al. 2015a), and CIFAR10-DVS (Li et al. 2017). |
| Dataset Splits | No | In the training phase, since the feature update achieved by the asynchronous event processing procedure is equivalent to recalculating the full graph nodes, the LSNet can be trained on batches of sliding windows through backpropagation. In this paper, sliding windows are obtained based on a fixed time interval T. The cross-entropy loss function with label smoothing is adopted for training. |
| Hardware Specification | Yes | The proposed method is implemented in PyTorch and trained on a TITAN RTX GPU. |
| Software Dependencies | No | The proposed method is implemented in PyTorch and trained on a TITAN RTX GPU. No library or framework versions are reported. |
| Experiment Setup | Yes | For the first local-shift layer, K and R in the ball-query strategy are set to 8 and 0.06 for the upper path and 16 and 0.12 for the lower path, respectively. For the second layer, K and R are set to 16 and 0.12, and 32 and 0.24, respectively. In the pooling layers, the numbers of representative nodes are 512 and 256. The batch size is 64 and the Adam optimizer (Kingma and Ba 2015) is adopted with an initial learning rate of 0.001, multiplied by 0.5 every 20 epochs (see the configuration sketches after the table). |
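
As a rough illustration of the training setup quoted above (batch size 64, Adam with an initial learning rate of 0.001 halved every 20 epochs, cross-entropy loss with label smoothing), here is a minimal PyTorch sketch. The `nn.Linear` placeholder, the smoothing factor of 0.1, and the epoch count are assumptions; the paper's actual LSNet architecture and unreported values are not reproduced here.

```python
import torch
import torch.nn as nn

# Placeholder standing in for LSNet; the real local-shift layers and pooling
# are not reproduced in this sketch.
model = nn.Linear(16, 10)

# Cross-entropy with label smoothing, as stated for training; the smoothing
# factor 0.1 is an assumption (the paper does not report the value).
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

# Adam with an initial learning rate of 0.001, halved every 20 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

batch_size = 64  # batch size reported in the experiment setup

for epoch in range(60):                      # epoch count is an assumption
    features = torch.randn(batch_size, 16)   # stand-in for sliding-window graphs
    labels = torch.randint(0, 10, (batch_size,))
    loss = criterion(model(features), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                         # learning-rate decay once per epoch
```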
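The ball-query settings reported in the experiment setup can likewise be sketched. The `ball_query` helper below is hypothetical and only illustrates the general idea of gathering at most K neighbours within radius R around each query node; it is not the authors' implementation, and the representative nodes are simple stand-ins rather than the paper's pooling output.

```python
import torch

# (K, R) per local-shift layer and path, as reported in the experiment setup.
BALL_QUERY_CFG = {
    "layer1": {"upper": (8, 0.06), "lower": (16, 0.12)},
    "layer2": {"upper": (16, 0.12), "lower": (32, 0.24)},
}

def ball_query(nodes: torch.Tensor, queries: torch.Tensor, k: int, r: float) -> torch.Tensor:
    """For each query node, return indices of up to k nodes within radius r.

    Out-of-ball slots are padded with the query's nearest in-ball neighbour
    (a common convention). Assumes nodes.shape[0] >= k.
    """
    dist = torch.cdist(queries, nodes)                 # (Q, N) pairwise distances
    within = dist <= r                                 # ball membership mask
    dist = dist.masked_fill(~within, float("inf"))     # push outsiders to the end
    idx = dist.topk(k, dim=1, largest=False).indices   # k closest candidates
    nearest = dist.argmin(dim=1, keepdim=True).expand(-1, k)
    valid = torch.gather(within, 1, idx)               # which picks were in-ball
    return torch.where(valid, idx, nearest)

# Example: first local-shift layer, upper path (K=8, R=0.06) on toy data.
events = torch.rand(1024, 3)                 # stand-in event nodes, (x, y, t) coords
reps = events[:512]                          # stand-in representative nodes
k, r = BALL_QUERY_CFG["layer1"]["upper"]
neighbours = ball_query(events, reps, k, r)  # (512, 8) neighbour indices
```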