Point Cloud Processing via Recurrent Set Encoding

Authors: Pengxiang Wu, Chao Chen, Jingru Yi, Dimitris Metaxas

AAAI 2019, pp. 5441-5449

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate our RCNet on multiple benchmark datasets, including ModelNet10/40 (Wu et al. 2015), ShapeNet part segmentation (Yi et al. 2016), and S3DIS (Armeni et al. 2016). In addition, we analyze the properties of RCNet in detail with extensive controlled experiments."
Researcher Affiliation | Academia | 1. Department of Computer Science, Rutgers University, NJ, USA ({pw241, jy486, dnm}@cs.rutgers.edu); 2. Department of Biomedical Informatics, Stony Brook University, NY, USA (chao.chen.cchen@gmail.com)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states: "Code can be found on the authors' homepage." This is too vague to count as concrete access to the source code: it provides neither a direct repository link nor an explicit statement that the code for this work has been released.
Open Datasets | Yes | "In this section, we evaluate our RCNet on multiple benchmark datasets, including ModelNet10/40 (Wu et al. 2015), ShapeNet part segmentation (Yi et al. 2016), and S3DIS (Armeni et al. 2016)."
Dataset Splits | Yes | "ModelNet10 is composed of 3991 train and 908 test CAD models from 10 classes, while ModelNet40 consists of 12311 models from 40 categories, with 9843 models used for training and 2468 for testing." For part segmentation: "Following the setting in (Yi et al. 2016), we evaluate our methods assuming that the category of the input 3D shape is already known. The segmentation results are reported with the standard metric mIoU (Qi et al. 2017a). We use the official train/test split as in (Chang et al. 2015) in our experiment." For S3DIS: "As in (Qi et al. 2017a), we also use k-fold strategy for training and testing." (These counts are restated as a small configuration sketch after the table.)
Hardware Specification | Yes | "The networks are optimized using Adam (Kingma and Ba 2015), and it takes about 2-3 hours for the training to converge on a single NVIDIA GTX 1080 Ti GPU." The hardware used is an Intel i7-6850K CPU and a single NVIDIA GTX 1080 Ti GPU.
Software Dependencies | No | The paper does not provide version numbers for its software dependencies. It mentions optimizing with Adam and using a GRU, but names neither the deep learning framework (e.g., TensorFlow, PyTorch) nor any other library versions. (A generic GRU sketch after the table illustrates the technique, though not the authors' exact architecture.)
Experiment Setup | Yes | "We set the hyper-parameters r = 32 and s = 32. The learning rate is initialized to 0.001 with a decay of 0.1 every 30 epochs. The networks are optimized using Adam (Kingma and Ba 2015). We uniformly sample 1024 points from the mesh, and then normalize them to fit within a unit ball, centered at the origin. We apply data augmentation during the training procedure by randomly translating and scaling the objects, as well as perturbing the point positions." (See the training-setup sketch after the table.)
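
The dataset-split row reports concrete counts, and for anyone scripting a reproduction they translate directly into configuration. A minimal sketch; the dictionary layout and the `check_split` helper are our own illustration, not from the paper:

```python
# Split counts quoted from the paper; the dict layout itself is our convention.
DATASET_SPLITS = {
    "ModelNet10": {"classes": 10, "train": 3991, "test": 908},
    "ModelNet40": {"classes": 40, "train": 9843, "test": 2468},  # 12311 models total
    # ShapeNet part segmentation uses the official split of (Chang et al. 2015);
    # S3DIS uses k-fold training/testing as in (Qi et al. 2017a).
}

def check_split(name: str, n_train: int, n_test: int) -> bool:
    """Return True if a loaded dataset matches the counts reported in the paper."""
    expected = DATASET_SPLITS[name]
    return (n_train, n_test) == (expected["train"], expected["test"])

assert check_split("ModelNet10", 3991, 908)
```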
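
The software-dependencies row notes that the paper names Adam and GRUs but no framework. Purely to make "recurrent set encoding" concrete, here is a generic GRU-based set encoder in PyTorch; the module name, hidden size, and single-layer design are our assumptions, and this is not the authors' RCNet architecture:

```python
import torch
import torch.nn as nn

class RecurrentSetEncoder(nn.Module):
    """Illustrative GRU-based encoder for a set of 3D points.

    NOT the paper's RCNet; it only shows the generic idea of feeding an
    (ordered) point sequence through a GRU and using the final hidden
    state as a fixed-length set embedding.
    """

    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.gru = nn.GRU(input_size=3, hidden_size=hidden_size, batch_first=True)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3)
        _, h_n = self.gru(points)  # h_n: (1, batch, hidden_size)
        return h_n.squeeze(0)      # (batch, hidden_size) set embedding

encoder = RecurrentSetEncoder()
embedding = encoder(torch.randn(8, 1024, 3))  # 1024 points per cloud, as in the paper
print(embedding.shape)  # torch.Size([8, 128])
```

Note that a plain GRU over points is order-sensitive, which is exactly why a set-encoding scheme needs some canonical ordering or partitioning of the input; the sketch above leaves that choice open.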
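
The experiment-setup row likewise maps onto standard training code. A hedged sketch, with PyTorch/NumPy assumed: the augmentation magnitudes are placeholders (the quoted text does not specify them), and the `Linear` model is a stand-in for RCNet itself; only the 1024-point sampling, unit-ball normalization, Adam optimizer, and 0.1-per-30-epochs schedule come from the paper:

```python
import numpy as np
import torch

def preprocess(points: np.ndarray, n: int = 1024) -> np.ndarray:
    """Uniformly sample n points and normalize them into a unit ball at the origin.

    Assumes `points` is a dense (N, 3) array already sampled from the mesh
    surface, with N >= n.
    """
    idx = np.random.choice(len(points), n, replace=False)
    pts = points[idx]
    pts = pts - pts.mean(axis=0)                   # center at the origin
    pts = pts / np.linalg.norm(pts, axis=1).max()  # scale to fit the unit ball
    return pts

def augment(pts: np.ndarray) -> np.ndarray:
    """Random scaling, translation, and point jitter; magnitudes are assumptions."""
    pts = pts * np.random.uniform(0.8, 1.25)               # random scaling
    pts = pts + np.random.uniform(-0.1, 0.1, size=(1, 3))  # random translation
    pts = pts + np.clip(0.01 * np.random.randn(*pts.shape), -0.05, 0.05)  # jitter
    return pts

model = torch.nn.Linear(3, 40)  # stand-in for the actual RCNet model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# Decay the learning rate by a factor of 0.1 every 30 epochs, as quoted above.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
```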