Learning Transferable Features for Point Cloud Detection via 3D Contrastive Co-training
Authors: Zeng Yihan, Chunwei Wang, Yunbo Wang, Hang Xu, Chaoqiang Ye, Zhen Yang, Chao Ma
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We construct new domain adaptation benchmarks using three large-scale 3D datasets. Experimental results show that our proposed 3D-CoCo effectively closes the domain gap and outperforms the state-of-the-art methods by large margins. |
| Researcher Affiliation | Collaboration | 1 MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University 2 Huawei Noah's Ark Lab {zengyihan,weiwei0224,yunbow,chaoma}@sjtu.edu.cn {xu.hang,yechaoqiang,yang.zhen}@huawei.com |
| Pseudocode | Yes | Algorithm 1: The learning procedure of 3D contrastive co-training (3D-CoCo) |
| Open Source Code | No | The paper does not include an unambiguous statement where the authors explicitly state they are releasing the code for the work described in this paper, nor does it provide a direct link to a source-code repository. |
| Open Datasets | Yes | We evaluate 3D-CoCo on three widely used LiDAR-based datasets, including Waymo [26], nuScenes [1], and KITTI [6]. |
| Dataset Splits | Yes | We set the maximum number of training epochs to 30 for KITTI and 20 for Waymo and nuScenes, with a warm-up process taking half of the total epochs. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like "VoxelNet", "PointPillars", and "Adam optimizer", but it does not specify version numbers for any of these or other key software libraries/frameworks (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | Yes | We set the voxel size to (0.1, 0.1, 0.15)m for VoxelNet and (0.1, 0.1)m for PointPillars. We use the Adam optimizer [13] with a learning rate of 1.5 × 10⁻³. We set the maximum number of training epochs to 30 for KITTI and 20 for Waymo and nuScenes, with a warm-up process taking half of the total epochs. For pseudo-label generation, we apply a high-pass threshold of 0.7 to IoU to obtain foreground samples, and a low-pass threshold of 0.2 for background samples. |
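The pseudo-label thresholding quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration of the stated 0.7/0.2 IoU filtering rule, not the authors' released code; the function name and the `(box, iou)` input format are assumptions for illustration.

```python
def split_pseudo_labels(proposals, fg_thresh=0.7, bg_thresh=0.2):
    """Split detector proposals into foreground / background pseudo-label sets.

    proposals: iterable of (box, iou) pairs, where iou is the proposal's
    IoU with a pseudo ground-truth box.
    """
    foreground = [p for p in proposals if p[1] >= fg_thresh]  # high-pass at 0.7
    background = [p for p in proposals if p[1] <= bg_thresh]  # low-pass at 0.2
    # Proposals falling between the two thresholds are ambiguous and are
    # excluded from both sets.
    return foreground, background
```

Proposals with IoU in the (0.2, 0.7) band contribute to neither set, which is the usual effect of pairing a high-pass foreground threshold with a low-pass background threshold.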