Self-Attention Amortized Distributional Projection Optimization for Sliced Wasserstein Point-Cloud Reconstruction

Authors: Khai Nguyen, Dang Nguyen, Nhat Ho

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To verify the effectiveness of our proposal, we evaluate our methods on the point-cloud reconstruction task and its two downstream tasks including transfer learning and point-cloud generation. The quantitative results are summarized in Table 1."
Researcher Affiliation | Collaboration | "(1) Department of Statistics and Data Sciences, University of Texas at Austin, USA; (2) VinAI Research."
Pseudocode | Yes | "Algorithm 1: Sampling from vMF distribution" (a sketch of a standard vMF sampler follows the table)
Open Source Code | Yes | "Code for the paper is published at https://github.com/hsgser/Self-Amortized-DSW."
Open Datasets | Yes | "Our autoencoder is trained on the ShapeNet Core-55 dataset (Chang et al., 2015) with a batch size of 128 and a point-cloud size of 2048."
Dataset Splits | Yes | "The chair category of ShapeNet is divided into train/valid/test sets in an 85/5/10 ratio." (a split sketch follows the table)
Hardware Specification | Yes | "All experiments are run on NVIDIA V100 GPUs."
Software Dependencies | No | The paper mentions using an "Adam optimizer (Kingma & Ba, 2014)" but does not provide version numbers for software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | "Our autoencoder is trained on the ShapeNet Core-55 dataset (Chang et al., 2015) with a batch size of 128 and a point-cloud size of 2048. We train it for 300 epochs using an SGD optimizer with an initial learning rate of 1e-3, a momentum of 0.9, and a weight decay of 5e-4." (a training-loop sketch follows the table)
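
The paper's Algorithm 1 covers vMF sampling. As a point of reference, here is a minimal NumPy sketch of the classic Ulrich (1984)/Wood (1994) rejection sampler for the von Mises-Fisher distribution; it is an assumption that the paper's procedure matches this textbook algorithm, and the function name `sample_vmf` is illustrative, not the authors' code.

```python
import numpy as np

def sample_vmf(mu, kappa, rng=None):
    """Draw one sample from vMF(mu, kappa) on the unit sphere in R^d.

    Textbook Ulrich/Wood rejection sampler; not the paper's implementation.
    """
    rng = np.random.default_rng(rng)
    d = mu.shape[0]
    # Rejection-sample the projection w = <x, mu> along the mean direction.
    b = (-2.0 * kappa + np.sqrt(4.0 * kappa**2 + (d - 1) ** 2)) / (d - 1)
    x0 = (1.0 - b) / (1.0 + b)
    c = kappa * x0 + (d - 1) * np.log(1.0 - x0**2)
    while True:
        z = rng.beta((d - 1) / 2.0, (d - 1) / 2.0)
        w = (1.0 - (1.0 + b) * z) / (1.0 - (1.0 - b) * z)
        if kappa * w + (d - 1) * np.log(1.0 - x0 * w) - c >= np.log(rng.uniform()):
            break
    # Uniform direction in the tangent space at the north pole e_d.
    v = rng.standard_normal(d - 1)
    v /= np.linalg.norm(v)
    x = np.concatenate([np.sqrt(1.0 - w**2) * v, [w]])
    # Householder reflection mapping the north pole e_d onto mu.
    u = np.zeros(d)
    u[-1] = 1.0
    u -= mu
    norm = np.linalg.norm(u)
    if norm > 1e-12:  # skip when mu already equals the north pole
        u /= norm
        x -= 2.0 * u * (u @ x)
    return x

sample = sample_vmf(np.array([0.0, 0.0, 1.0]), kappa=50.0, rng=0)
```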
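
For the 85/5/10 split reported above, a minimal sketch; the seed and the shape count below are placeholders, not values taken from the paper.

```python
import numpy as np

def split_indices(n, seed=0):
    """Shuffle n indices and cut them into 85/5/10 train/valid/test parts."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train, n_valid = int(0.85 * n), int(0.05 * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_valid],
            idx[n_train + n_valid:])

# Placeholder count of chair shapes; substitute the real dataset size.
train_idx, valid_idx, test_idx = split_indices(1000, seed=0)
```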
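
The reported hyperparameters (batch size 128, 2048 points per cloud, 300 epochs, SGD with learning rate 1e-3, momentum 0.9, weight decay 5e-4) translate into the following PyTorch sketch. The tiny autoencoder, the synthetic batch, and the plain sliced Wasserstein loss are illustrative stand-ins, not the authors' architecture or their amortized projection method.

```python
import torch
import torch.nn as nn

def sliced_wasserstein(x, y, n_proj=100):
    """Squared SW_2 estimate between batched point clouds of shape (B, N, 3)."""
    theta = torch.randn(n_proj, x.shape[-1], device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)   # random directions on S^2
    px = torch.sort(x @ theta.T, dim=1).values        # sorted 1-D projections
    py = torch.sort(y @ theta.T, dim=1).values
    return ((px - py) ** 2).mean()

class TinyAE(nn.Module):
    """Illustrative stand-in autoencoder, not the paper's architecture."""
    def __init__(self, n_points=2048):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
        self.dec = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                                 nn.Linear(256, n_points * 3))
        self.n_points = n_points

    def forward(self, x):                             # x: (B, N, 3)
        z = self.enc(x).max(dim=1).values             # global max-pooled code
        return self.dec(z).view(-1, self.n_points, 3)

model = TinyAE()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=5e-4)

clouds = torch.randn(128, 2048, 3)                    # one synthetic batch
for epoch in range(300):                              # 300 epochs as reported
    loss = sliced_wasserstein(model(clouds), clouds)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```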