EditVAE: Unsupervised Parts-Aware Controllable 3D Point Cloud Shape Generation

Authors: Shidi Li, Miaomiao Liu, Christian Walder

AAAI 2022, pp. 1386-1394

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide extensive experimental results on SHAPENET which quantitatively demonstrate the superior performance of our method as a generator of point clouds. Experiments. Evaluation metrics. We evaluate our EDITVAE on SHAPENET (Chang et al. 2015) with the same data split as Shu, Park, and Kwon (2019) and report results on the three dominant categories of chair, airplane, and table. We adopt the evaluation metrics of Achlioptas et al. (2018), including Jensen-Shannon Divergence (JSD), Minimum Matching Distance (MMD), and Coverage (COV). As MMD and COV may be computed with either Chamfer Distance (CD) or Earth-Mover Distance (EMD), we obtain five different evaluation metrics, i.e. JSD, MMD-CD, MMD-EMD, COV-CD, and COV-EMD. Baselines. We compare with four existing models: r-GAN (Achlioptas et al. 2018), Valsesia (Valsesia, Fracastoro, and Magli 2018), TREEGAN (Shu, Park, and Kwon 2019), and MRGAN (Gal et al. 2020). (A hedged sketch of the MMD/COV computation appears after the table.)
Researcher Affiliation | Academia | Shidi Li¹, Miaomiao Liu¹, Christian Walder¹,²; ¹Australian National University; ²Data61, CSIRO; {shidi.li, miaomiao.liu}@anu.edu.au, christian.walder@data61.csiro.au
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | Code will be provided on publication of the paper.
Open Datasets | Yes | We evaluate our EDITVAE on SHAPENET (Chang et al. 2015) with the same data split as Shu, Park, and Kwon (2019) and report results on the three dominant categories of chair, airplane, and table.
Dataset Splits | Yes | We evaluate our EDITVAE on SHAPENET (Chang et al. 2015) with the same data split as Shu, Park, and Kwon (2019)...
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions the ADAM optimizer and the β-VAE framework, and refers to architectures such as POINTNET and TREEGAN, but it does not specify version numbers for any software components or libraries.
Experiment Setup | Yes | We trained EDITVAE using the ADAM optimizer (Kingma and Ba 2015) with a learning rate of 0.0001 for 1000 epochs and a batch size of 30. To fine-tune our model we adopted the β-VAE framework (Higgins et al. 2016). (A sketch of this setup follows below.)
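
The Experiment Setup row reports enough to reconstruct the optimisation loop, though not the architecture. Below is a minimal PyTorch sketch of that setup under stated assumptions: `EditVAE`, `make_dataloader`, and `reconstruction_loss` are hypothetical placeholders for the paper's actual model, data pipeline, and losses, and the β weight is illustrative, since the paper only says the β-VAE framework was used for fine-tuning.

```python
# Sketch of the reported training setup: ADAM, lr 1e-4, batch size 30,
# 1000 epochs, with a beta-VAE objective (reconstruction + beta * KL).
# EditVAE, make_dataloader, and reconstruction_loss are hypothetical
# placeholders standing in for the paper's actual model and losses.
import torch

model = EditVAE()                        # hypothetical model class
loader = make_dataloader(batch_size=30)  # hypothetical ShapeNet loader
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
beta = 1.0                               # illustrative value; not reported in the paper

for epoch in range(1000):
    for points in loader:
        recon, mu, logvar = model(points)
        # KL divergence of a diagonal Gaussian posterior from a unit Gaussian prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        loss = reconstruction_loss(recon, points) + beta * kl
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```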
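For context on the metrics quoted under Research Type, here is a minimal NumPy sketch of MMD-CD and COV-CD, assuming each point cloud is an (N, 3) array; the symmetric Chamfer Distance used here is one common variant and may differ in scaling or squaring from the exact formulation of Achlioptas et al. (2018).

```python
# Minimal sketch of MMD-CD and COV-CD between sets of generated and
# reference point clouds, each cloud an (N, 3) NumPy array.
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Symmetric Chamfer Distance: mean squared nearest-neighbour
    # distance from a to b plus the same from b to a.
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (Na, Nb)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mmd_cov_cd(generated, reference):
    # MMD-CD: for each reference cloud, distance to its nearest
    # generated cloud, averaged over the reference set.
    # COV-CD: fraction of reference clouds that are the nearest
    # neighbour of at least one generated cloud.
    dists = np.array([[chamfer_distance(g, r) for r in reference]
                      for g in generated])   # (|G|, |R|)
    mmd = dists.min(axis=0).mean()
    cov = len(set(dists.argmin(axis=1))) / len(reference)
    return mmd, cov
```

COV rewards diversity (a reference shape counts as covered only if it is some generated shape's nearest neighbour), while MMD rewards fidelity, which is why the two metrics are reported together.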