Learning Localized Generative Models for 3D Point Clouds via Graph Convolution

Authors: Diego Valsesia, Giulia Fracastoro, Enrico Magli

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We tested the proposed architecture by using three classes of point clouds taken from the ShapeNet repository (Chang et al., 2015): chair, airplane and sofa. To the best of our knowledge this is the first work addressing GANs for point clouds learning localized features. We compare the proposed GAN for point cloud generation with other GANs able to deal with unordered sets of points.
Researcher Affiliation | Academia | Diego Valsesia, Giulia Fracastoro, Enrico Magli; Politecnico di Torino, Torino, Italy (diego.valsesia@polito.it, giulia.fracastoro@polito.it, enrico.magli@polito.it)
Pseudocode | No | The paper describes the model architecture and operations verbally and with diagrams, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code, nor does it provide any links to a code repository.
Open Datasets | Yes | We tested the proposed architecture by using three classes of point clouds taken from the ShapeNet repository (Chang et al., 2015): chair, airplane and sofa.
Dataset Splits | No | The paper mentions training models and using a test set, but it does not provide specific percentages or counts for training, validation, or test splits. It references the test set in the context of evaluation metrics but provides no detailed split information for reproduction.
Hardware Specification | Yes | We thank Nvidia for donating a Quadro P6000 GPU for this work.
Software Dependencies | No | The paper mentions activation functions (Leaky ReLUs) and an optimization method (RMSProp), but it does not specify any software dependencies with version numbers (e.g., deep learning frameworks, programming language versions, or libraries).
Experiment Setup | Yes | The generator architecture is reported in Table 1. The graph is built by selecting the 20 nearest neighbors in terms of Euclidean distance in the feature space. We use Leaky ReLUs as nonlinearities and RMSProp as optimization method with a learning rate equal to 10^-4 for both generator and discriminator. Batch normalization follows every graph convolution. The gradient penalty parameter of the WGAN is 1 and the discriminator is optimized for 5 iterations for each generator step. The models have been trained for 1000 epochs.
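The quoted setup (a k-nearest-neighbor graph in feature space with k = 20, RMSProp at learning rate 10^-4, WGAN gradient penalty 1, 5 discriminator iterations per generator step, 1000 epochs) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the function name `knn_graph` and the configuration keys are ours.

```python
import numpy as np

def knn_graph(features, k=20):
    """Neighbor indices for each point: the k nearest rows of
    `features` under Euclidean distance, excluding the point itself."""
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.fill_diagonal(d2, np.inf)  # exclude self-loops
    return np.argsort(d2, axis=1)[:, :k]

# Hyperparameters quoted in the paper; key names are illustrative.
config = {
    "k_neighbors": 20,       # graph built from 20 nearest neighbors
    "nonlinearity": "leaky_relu",
    "optimizer": "rmsprop",
    "learning_rate": 1e-4,   # same for generator and discriminator
    "batch_norm": True,      # after every graph convolution
    "gp_weight": 1.0,        # WGAN gradient penalty coefficient
    "critic_iters": 5,       # discriminator steps per generator step
    "epochs": 1000,
}

# Example: a feature matrix for 128 points with 64-dim features.
neighbors = knn_graph(np.random.randn(128, 64), k=config["k_neighbors"])
print(neighbors.shape)  # (128, 20)
```

Note that the graph is recomputed from the intermediate feature space rather than fixed in advance, which is what makes the learned filters localized with respect to features rather than input coordinates.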