SSDA3D: Semi-supervised Domain Adaptation for 3D Object Detection from Point Cloud

Authors: Yan Wang, Junbo Yin, Wei Li, Pascal Frossard, Ruigang Yang, Jianbing Shen

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments from Waymo to nuScenes show that, with only 10% labeled target data, our SSDA3D can surpass the fully-supervised oracle model with 100% target label. Our code is available at https://github.com/yinjunbo/SSDA3D.
Researcher Affiliation | Collaboration | Yan Wang¹*, Junbo Yin¹*, Wei Li², Pascal Frossard³, Ruigang Yang², Jianbing Shen⁴; ¹Beijing Institute of Technology, ²Inceptio, ³École Polytechnique Fédérale de Lausanne (EPFL), ⁴SKL-IOTSC, CIS, University of Macau
Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks. It describes the modules in text.
Open Source Code | Yes | Our code is available at https://github.com/yinjunbo/SSDA3D.
Open Datasets | Yes | Our experiments are conducted on two widely used datasets: Waymo (Sun et al. 2020) with 64-beam LiDAR and nuScenes (Caesar et al. 2019) with 32-beam LiDAR.
Dataset Splits | No | The paper specifies how labeled target data is sampled for training (1%, 5%, 10%, or 100% of nuScenes training samples) and that the remaining samples are kept unlabeled. However, it does not explicitly define a separate validation split for hyperparameter tuning or model selection in the context of reproducibility. (A split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | All the methods are implemented based on an advanced 3D detector, CenterPoint (Yin, Zhou, and Krähenbühl 2021). The learning schedule follows the popular codebase OpenPCDet (Team 2020), where the training epochs are set to 20 for both stages.
Experiment Setup | Yes | The learning schedule follows the popular codebase OpenPCDet (Team 2020), where the training epochs are set to 20 for both stages. Both Inter-domain Point-CutMix and Intra-domain Point-MixUp are applied with a preset probability. The detection range is set to [-54.0, 54.0] m for the X and Y axes and [-5.0, 4.8] m for the Z axis, and the voxel size is set to [0.075, 0.075, 0.2]. As the range of intensity differs between Waymo and nuScenes, intensity is normalized to [0, 1] for both datasets. For augmentation, the widely used random world flip, random world rotation, and random world scaling are adopted in both learning stages. (Preprocessing and augmentation sketches follow the table.)
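
The Dataset Splits row notes that 1%, 5%, 10%, or 100% of nuScenes training samples are labeled and the rest are kept unlabeled. A minimal sketch of how such a partition could be drawn, assuming a plain random sample over training frames; the function name, seed handling, and frame count are illustrative assumptions, not taken from the released code:

```python
import random

def split_target_frames(frame_ids, labeled_ratio=0.10, seed=0):
    """Reserve `labeled_ratio` of target frames as labeled; the rest stay unlabeled.

    A hypothetical helper, not part of the SSDA3D repository.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    ids = list(frame_ids)
    rng.shuffle(ids)
    n_labeled = int(len(ids) * labeled_ratio)
    return ids[:n_labeled], ids[n_labeled:]  # (labeled, unlabeled)

# e.g. a 10% labeled split over a hypothetical list of training frame ids
labeled, unlabeled = split_target_frames(range(28130), labeled_ratio=0.10)
```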
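For the preprocessing numbers in the Experiment Setup row (detection range, voxel size, intensity normalization), here is a hedged sketch. The constant names mimic OpenPCDet-style configuration keys but are assumptions, as is the per-dataset maximum intensity value:

```python
import numpy as np

# Values quoted from the paper; key names are assumptions in OpenPCDet style.
POINT_CLOUD_RANGE = [-54.0, -54.0, -5.0, 54.0, 54.0, 4.8]  # [x_min, y_min, z_min, x_max, y_max, z_max] in meters
VOXEL_SIZE = [0.075, 0.075, 0.2]                            # meters per voxel along x, y, z

def normalize_intensity(points, max_intensity):
    """Scale the intensity channel (column 3) of an (N, 4) point array to [0, 1].

    Waymo and nuScenes use different raw intensity ranges, so `max_intensity`
    must be chosen per dataset; the exact values are not given in the table.
    """
    pts = points.copy()
    pts[:, 3] = np.clip(pts[:, 3] / max_intensity, 0.0, 1.0)
    return pts
```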
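The internals of Inter-domain Point-CutMix and Intra-domain Point-MixUp are not reproduced in the table, so the following is only a simplified illustration of the cut-and-paste idea behind a point-level CutMix: with some probability, one azimuth sector of the target scene is replaced by the corresponding sector of the source scene. The sector partitioning and every name here are assumptions, not the paper's exact scheme:

```python
import numpy as np

def point_cutmix(source_pts, target_pts, prob=0.5, seed=None):
    """Toy inter-domain CutMix over two (N, 4) point clouds.

    With probability `prob`, swap one of four azimuth quadrants of the target
    scene for the matching quadrant of the source scene; otherwise return the
    target unchanged. A hedged sketch of the concept only.
    """
    rng = np.random.default_rng(seed)
    if rng.random() >= prob:
        return target_pts
    quadrant = rng.integers(4)                       # pick one of four sectors
    lo = -np.pi + quadrant * np.pi / 2
    hi = lo + np.pi / 2
    src_ang = np.arctan2(source_pts[:, 1], source_pts[:, 0])
    tgt_ang = np.arctan2(target_pts[:, 1], target_pts[:, 0])
    # keep target points outside the sector, paste source points inside it
    return np.concatenate([target_pts[(tgt_ang < lo) | (tgt_ang >= hi)],
                           source_pts[(src_ang >= lo) & (src_ang < hi)]])
```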