MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D Object Detection
Authors: Yuxue Yang, Lue Fan, Zhaoxiang Zhang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate its effectiveness in nuScenes, Waymo Open Dataset, and KITTI, employing various detectors. MixSup achieves up to 97.31% of fully supervised performance, using cheap cluster annotations and only 10% box annotations. |
| Researcher Affiliation | Academia | Yuxue Yang1,2,3,5 Lue Fan2,3,5 Zhaoxiang Zhang1,2,3,4,5 1School of Artificial Intelligence, UCAS 2University of Chinese Academy of Sciences (UCAS) 3Institute of Automation, Chinese Academy of Sciences (CASIA) 4Centre for Artificial Intelligence and Robotics (HKISI CAS) 5State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS) {yangyuxue2023,fanlue2019,zhaoxiang.zhang}@ia.ac.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/BraveGroup/PointSAM-for-MixSup. |
| Open Datasets | Yes | nuScenes (Caesar et al., 2020) is a popular dataset for autonomous driving research. Waymo Open Dataset (WOD) (Sun et al., 2020) is a widely recognized dataset utilized for 3D object detection. KITTI (Geiger et al., 2012) is one of the earliest datasets for 3D detection evaluation. |
| Dataset Splits | Yes | We randomly choose 10% and 1% of ground truth boxes to serve as box-level labels. |
| Hardware Specification | Yes | All experiments are conducted on 8 RTX 3090 GPUs. |
| Software Dependencies | No | The implementation of MixSup is based on the popular codebases MMDetection3D (Contributors, 2020) and OpenPCDet (Team, 2020). Specific version numbers for these or other dependencies are not provided. |
| Experiment Setup | No | The paper states 'The training schedule and hyperparameters are all the same as the fully-supervised training,' but does not explicitly list concrete hyperparameter values or detailed training configurations. |