SimDistill: Simulated Multi-Modal Distillation for BEV 3D Object Detection
Authors: Haimei Zhao, Qiming Zhang, Shanshan Zhao, Zhe Chen, Jing Zhang, Dacheng Tao
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments validate the effectiveness and superiority of SimDistill over state-of-the-art methods, achieving an improvement of 4.8% mAP and 4.1% NDS over the baseline detector. The source code will be released at https://github.com/ViTAE-Transformer/SimDistill. |
| Researcher Affiliation | Academia | 1School of Computer Science, The University of Sydney, Australia, 2School of Computing, Engineering and Mathematical Sciences, La Trobe University, Australia |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code will be released at https://github.com/ViTAE-Transformer/SimDistill. |
| Open Datasets | Yes | We follow the common practice (Huang et al. 2021; Liu et al. 2023; Liang et al. 2022; Li et al. 2023b; Chen et al. 2023) to evaluate our method on the most challenging benchmark, i.e., nuScenes (Caesar et al. 2020). |
| Dataset Splits | Yes | It comprises 700 scenes for training, 150 scenes for validation, and 150 scenes for testing. |
| Hardware Specification | Yes | Our method is implemented with PyTorch using 8 NVIDIA A100 (40G Memory), based on the MMDetection3D codebase (Contributors 2020). |
| Software Dependencies | No | The paper mentions "PyTorch" and "MMDetection3D codebase" but does not specify their version numbers. |
| Experiment Setup | Yes | We train the student model for 20 epochs with batch size 24. |