PointCNN: Convolution On X-Transformed Points

Authors: Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, Baoquan Chen

NeurIPS 2018

Reproducibility Assessment (Variable / Result / LLM Response)
Research Type: Experimental
Quote: "Experiments show that PointCNN achieves on par or better performance than state-of-the-art methods on multiple challenging benchmark datasets and tasks." and "We conducted an extensive evaluation of PointCNN for shape classification on six datasets (ModelNet40 [52], ScanNet [9], TU-Berlin [11], QuickDraw [15], MNIST, CIFAR10), and segmentation task on three datasets (ShapeNet Parts [54], S3DIS [2], and ScanNet [9])."
Researcher Affiliation: Collaboration (Shandong University, Huawei Inc., Peking University)
Pseudocode: Yes
Quote: "ALGORITHM 1: X-Conv Operator"
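The paper's Algorithm 1 (X-Conv) lifts the local coordinates of a point's neighborhood into features, learns a K x K transformation X from those coordinates, and then convolves the X-transformed features into a single output feature. A minimal NumPy sketch of that flow for one representative point (the single-layer MLPs, random weights, and channel sizes here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def mlp(x, w, b):
    # Hypothetical single-layer MLP with ReLU; stands in for the MLPs in Algorithm 1.
    return np.maximum(x @ w + b, 0.0)

def x_conv(p, P, F, rng):
    """Sketch of X-Conv for one representative point.

    p : (3,)   representative point
    P : (K, 3) its K nearest neighbors
    F : (K, C1) input features attached to the neighbors
    """
    K, C1 = F.shape
    C_delta, C2 = 8, 16                              # assumed channel sizes
    # Step 1: move the neighborhood into local coordinates centered at p.
    P_local = P - p                                  # (K, 3)
    # Step 2: lift each local point into C_delta feature channels.
    w1 = rng.standard_normal((3, C_delta)); b1 = np.zeros(C_delta)
    F_delta = mlp(P_local, w1, b1)                   # (K, C_delta)
    # Step 3: concatenate lifted features with the input features.
    F_star = np.concatenate([F_delta, F], axis=1)    # (K, C_delta + C1)
    # Step 4: learn the K x K X-transformation from the local coordinates.
    wx = rng.standard_normal((3, K)); bx = np.zeros(K)
    X = mlp(P_local, wx, bx)                         # (K, K)
    # Step 5: weight (and potentially permute) the features with X.
    FX = X @ F_star                                  # (K, C_delta + C1)
    # Step 6: the final convolution collapses the K neighbors into one feature.
    wc = rng.standard_normal((K, C_delta + C1, C2))
    F_p = np.einsum('kc,kcd->d', FX, wc)             # (C2,)
    return F_p

rng = np.random.default_rng(0)
p = rng.standard_normal(3)
P = rng.standard_normal((8, 3))   # K = 8 neighbors
F = rng.standard_normal((8, 4))   # C1 = 4 input channels
out = x_conv(p, P, F, rng)
print(out.shape)
```

In the paper the whole operator is trained end to end; here the weights are fixed random draws purely to make the tensor shapes concrete.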
Open Source Code: Yes
Quote: "We open source our code at https://github.com/yangyanli/PointCNN to encourage further development."
Open Datasets: Yes
Quote: "We conducted an extensive evaluation of PointCNN for shape classification on six datasets (ModelNet40 [52], ScanNet [9], TU-Berlin [11], QuickDraw [15], MNIST, CIFAR10), and segmentation task on three datasets (ShapeNet Parts [54], S3DIS [2], and ScanNet [9])."
Dataset Splits: No
The paper mentions training and testing sets and describes a point-sampling strategy for training (N(N, (N/8)^2) points are used for training), but it does not provide specific training/validation/test splits (e.g., exact percentages or sample counts) for all datasets used, nor does it detail how validation sets were created or used beyond general mentions.
Hardware Specification: Yes
Quote: "As shown in Table 6, we summarize our running statistics based on the model for classification with batch size 16, 1024 input points on an NVIDIA Tesla P100 GPU, in comparison with several other methods." and "In addition, the model for segmentation with 2048 input points has 4.4M parameters and runs on an NVIDIA Tesla P100 with batch size 12 at 0.61/0.25 seconds per batch for training/inference."
Software Dependencies: No
The paper mentions "tensorflow [1]" as the implementation framework but does not give a version number for TensorFlow or for any other key software dependency.
Experiment Setup: Yes
Quote: "We implemented PointCNN in tensorflow [1], and use ADAM optimizer [21] with an initial learning rate 0.01 for the training of our models." and "As shown in Table 6, we summarize our running statistics based on the model for classification with batch size 16, 1024 input points..." and "Dropout is applied before the last fully connected layer to reduce over-fitting."
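The quoted setup trains with the ADAM optimizer at an initial learning rate of 0.01. A minimal NumPy sketch of one standard Adam update (Kingma & Ba) at that learning rate; the toy quadratic loss and parameter shapes are assumptions for illustration only:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # Standard Adam update with the paper's initial learning rate of 0.01.
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy quadratic loss L(w) = ||w||^2 / 2, whose gradient is simply w.
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 101):
    w, m, v = adam_step(w, w, m, v, t)
print("final max |w|:", float(np.abs(w).max()))
```

In the paper this optimizer drives the full PointCNN training loop in TensorFlow, with dropout applied before the last fully connected layer; the update rule itself is as above.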