EGSST: Event-based Graph Spatiotemporal Sensitive Transformer for Object Detection
Authors: Sheng Wu, Hang Sheng, Hui Feng, Bo Hu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we introduce the two datasets utilized, the evaluation metrics, and the implementation details of our models. We train the baseline model, EGSST-B, and the extended model, EGSST-E, and compare their performance with other state-of-the-art models applied to both datasets. Detailed ablation studies are then performed to assess the impact of various components of our models. |
| Researcher Affiliation | Academia | Sheng Wu¹, Hang Sheng¹, Hui Feng¹·², Bo Hu¹·² — ¹ School of Information Science and Technology, Fudan University; ² State Key Laboratory of Integrated Chips and Systems, Fudan University |
| Pseudocode | No | The paper describes its method in detail but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code can be found at: EGSST. |
| Open Datasets | Yes | Two complex event camera datasets from traffic scenarios are employed in the experiments: the Gen1 Automotive Detection Dataset [46] and the 1 Megapixel Automotive Detection Dataset [9]. |
| Dataset Splits | No | The paper mentions using datasets for training and testing, and sets a "training batch size", but it does not explicitly provide details on the training/validation/test splits, such as percentages, sample counts, or references to predefined splits. |
| Hardware Specification | Yes | The models are trained on RTX 3090 GPUs using the Lightning framework... we conducted additional tests on the T4 GPU, which has performance comparable to the Titan Xp and GTX 1080 Ti. (A Lightning launch sketch follows the table.) |
| Software Dependencies | Yes | The framework proposed in this study is developed using Python 3.9 and PyTorch 2.0, with graph processing powered by the advanced PyTorch Geometric library [48]. (A PyTorch Geometric sketch follows the table.) |
| Experiment Setup | Yes | We employ the Adam optimizer [49] coupled with the OneCycle learning rate schedule [50], which includes 100 warm-up iterations followed by cosine decay starting from the maximum learning rate. The training batch size is set at 8, with an initial learning rate of 1e-6. (An optimizer/schedule sketch follows the table.) |
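The Hardware Specification row reports training with the Lightning framework on RTX 3090 GPUs. The sketch below shows one way such a run could be launched; it is an illustration under assumptions, not the authors' training script, and `EGSSTModule` and `event_datamodule` are hypothetical placeholders.

```python
# Minimal sketch of launching training with the Lightning framework on an RTX 3090-class GPU.
# `EGSSTModule` and `event_datamodule` are hypothetical placeholders, not names from the paper.
import pytorch_lightning as pl

trainer = pl.Trainer(
    accelerator="gpu",   # the paper reports training on RTX 3090 GPUs
    devices=1,           # single-GPU run in this sketch; multi-GPU is also possible
    max_steps=100_000,   # assumed iteration budget; the table does not state one
)
# trainer.fit(EGSSTModule(), datamodule=event_datamodule)
```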
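The Software Dependencies row cites PyTorch Geometric for graph processing. The following sketch only illustrates how an event stream could be packed into a PyTorch Geometric graph; the feature layout, radius, and neighbor cap are assumptions and do not reproduce the authors' graph construction.

```python
# Minimal sketch of representing an event stream as a graph with PyTorch Geometric.
# The (x, y, t, polarity) layout, radius, and neighbor cap are illustrative assumptions.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import radius_graph

events = torch.rand(1024, 4)                 # toy events: (x, y, t, polarity), normalized
pos = events[:, :3]                          # spatiotemporal coordinates for neighbor search
edge_index = radius_graph(pos, r=0.05, max_num_neighbors=16)  # assumed connectivity rule

graph = Data(x=events, pos=pos, edge_index=edge_index)
print(graph)                                 # Data(x=[1024, 4], edge_index=[2, E], pos=[1024, 3])
```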
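The Experiment Setup row fixes the optimizer, warm-up length, decay strategy, batch size, and initial learning rate. The sketch below maps those values onto standard PyTorch components as one plausible reading; the model, total number of steps, and peak learning rate are assumptions not stated in the table.

```python
# Minimal sketch of the reported configuration: Adam + OneCycle, 100 warm-up iterations,
# cosine decay from the peak, batch size 8, initial learning rate 1e-6.
# The model, total step count, and peak learning rate below are assumptions.
import torch
from torch.optim.lr_scheduler import OneCycleLR

model = torch.nn.Linear(256, 4)   # placeholder standing in for the EGSST detector
total_steps = 100_000             # assumed training length
max_lr = 2e-4                     # assumed peak learning rate (not stated in the table)

optimizer = torch.optim.Adam(model.parameters(), lr=max_lr)
scheduler = OneCycleLR(
    optimizer,
    max_lr=max_lr,
    total_steps=total_steps,
    pct_start=100 / total_steps,  # 100 warm-up iterations before the peak
    anneal_strategy="cos",        # cosine decay starting from the maximum learning rate
    div_factor=max_lr / 1e-6,     # warm-up starts from the stated initial rate of 1e-6
)

batch_size = 8
for step in range(total_steps):
    # forward/backward on a batch of `batch_size` event samples would go here
    optimizer.step()
    scheduler.step()
```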