Learning Generalizable Part-based Feature Representation for 3D Point Clouds

Authors: Xin Wei, Xiang Gu, Jian Sun

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments and ablation studies on the 3DDA and 3DDG benchmarks justify the efficacy of the proposed approach for domain generalization, compared with previous state-of-the-art methods. |
| Researcher Affiliation | Academia | Xin Wei, Xiang Gu, Jian Sun. School of Mathematics and Statistics, Xi'an Jiaotong University, P. R. China. {wxmath, xianggu}@stu.xjtu.edu.cn, jiansun@xjtu.edu.cn |
| Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Our code will be available on http://github.com/weixmath/PDG. |
| Open Datasets | Yes | Sim-to-Real [22] is a 3DDG benchmark consisting of three domains: ModelNet [12], ShapeNet [11], and ScanObjectNN [14]. PointDA [25] is a widely used point cloud domain adaptation benchmark, which collects shapes of 10 shared classes from ModelNet [12] (M), ShapeNet [11] (S), and ScanNet [13] (S*). |
| Dataset Splits | Yes | We follow the data preparation and experiment setting in [22]. Specifically, we use the official training and test split strategy for each dataset. Each point cloud from the three domains contains 2,048 points and is normalized within a unit ball (see the normalization sketch after the table). |
| Hardware Specification | Yes | For fair comparison, we do not change the architecture of the backbone and train all methods except MetaSets for 160 epochs with batch size 32 on one NVIDIA V100 GPU. MetaSets is trained for 200 epochs with batch size 32 on two NVIDIA V100 GPUs. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as Python, PyTorch, TensorFlow, or other libraries used in the implementation. |
| Experiment Setup | Yes | For our PDG, we adopt PointNet and DGCNN as backbones of the feature extractor fθ and classifier Cψ. For fair comparison, we do not change the architecture of the backbone and train all methods except MetaSets for 160 epochs with batch size 32 on one NVIDIA V100 GPU. MetaSets is trained for 200 epochs with batch size 32 on two NVIDIA V100 GPUs. We use Adam as the optimizer. The initial learning rate and weight decay are 10^-3 and 10^-4, and the learning rate is reduced to 10^-5 following a cosine quarter-cycle. The number of parts M, the number of points k in each part, and the number of part-template features N_H are set to 8, 512, and 384, respectively. λ_p and λ_C in the training loss are 0.05 and 0.01 (see the training-setup sketch after the table). |
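
The preprocessing quoted in the Dataset Splits row (2,048 points per cloud, normalized within a unit ball) follows [22]. Below is a minimal NumPy sketch of that normalization step, assuming the usual center-then-scale-by-furthest-point convention; the paper does not spell out the exact formula, so treat this as an illustration rather than the authors' code.

```python
import numpy as np

def normalize_to_unit_ball(points: np.ndarray) -> np.ndarray:
    """Center a point cloud and scale it to fit within a unit ball.

    `points` is an (N, 3) array; the paper uses N = 2,048 points per cloud.
    """
    centered = points - points.mean(axis=0, keepdims=True)
    furthest = np.linalg.norm(centered, axis=1).max()
    return centered / furthest

# Example: a random cloud of 2,048 points, matching the paper's cloud size.
cloud = np.random.randn(2048, 3).astype(np.float32)
cloud = normalize_to_unit_ball(cloud)
assert np.linalg.norm(cloud, axis=1).max() <= 1.0 + 1e-6
```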
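
The Experiment Setup row maps directly onto a PyTorch training configuration. The sketch below is under stated assumptions: the `Linear` model is a placeholder for the actual PointNet/DGCNN backbone plus classifier, and the exact shape of the "cosine quarter-cycle" decay is one plausible reading of the quoted text, not code from the authors' repository.

```python
import math
import torch

# Hyperparameters quoted from the paper's experiment setup.
EPOCHS, LR0, LR_MIN, WEIGHT_DECAY = 160, 1e-3, 1e-5, 1e-4

model = torch.nn.Linear(3, 10)  # placeholder for the PointNet/DGCNN backbone + classifier
optimizer = torch.optim.Adam(model.parameters(), lr=LR0, weight_decay=WEIGHT_DECAY)

# Quarter-cycle cosine decay: the lr follows cos(x) for x in [0, pi/2],
# falling from LR0 at epoch 0 to LR_MIN at the final epoch. The schedule
# shape is an assumption; the paper only states "reduced to 10^-5
# following a cosine quarter-cycle".
def quarter_cosine(epoch: int) -> float:
    frac = epoch / max(EPOCHS - 1, 1)
    lr = LR_MIN + (LR0 - LR_MIN) * math.cos(frac * math.pi / 2)
    return lr / LR0  # LambdaLR expects a multiplier on the initial lr

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=quarter_cosine)

for epoch in range(EPOCHS):
    # ... one training epoch over batches of size 32 ...
    scheduler.step()
```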