Quasi-Monte Carlo for 3D Sliced Wasserstein

Authors: Khai Nguyen, Nicola Bariletto, Nhat Ho

ICLR 2024

Reproducibility variables, assessed results, and the LLM's supporting evidence (quoted from the paper):
Research Type: Experimental. Evidence: "Finally, we conduct experiments on various 3D tasks, such as point-cloud comparison, point-cloud interpolation, image style transfer, and training deep point-cloud autoencoders, to demonstrate the favorable performance of the proposed QSW and RQSW variants."
Researcher Affiliation: Academia. Evidence: "Khai Nguyen, Nicola Bariletto & Nhat Ho, Department of Statistics and Data Sciences, The University of Texas at Austin, Austin, TX 78712, USA. {khainb,nicola.bariletto,minhnhat}@utexas.edu"
Pseudocode: Yes. Evidence: "Algorithm 1: Monte Carlo estimation of the sliced Wasserstein distance. Algorithm 2: Quasi-Monte Carlo approximation of the sliced Wasserstein distance. Algorithm 3: Randomized Quasi-Monte Carlo estimation of the sliced Wasserstein distance with scrambling. Algorithm 4: Randomized Quasi-Monte Carlo estimation of the sliced Wasserstein distance with random rotation." A sketch contrasting Algorithms 1 and 2 follows below.
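For concreteness, here is a minimal NumPy/SciPy sketch contrasting Algorithm 1 (MC) with Algorithm 2 (QMC): the only change is swapping i.i.d. random projection directions for a low-discrepancy point set mapped to the sphere. The function name, the choice of a Sobol' sequence, and the Gaussian-based mapping to the sphere are illustrative assumptions for this sketch, not the authors' implementation (the paper discusses several QMC point-set constructions).

```python
import numpy as np
from scipy.stats import norm, qmc


def sliced_wasserstein(X, Y, L=100, p=2, use_qmc=False, seed=0):
    """Approximate SW_p between two equal-size 3D point clouds X, Y (n x 3)."""
    if use_qmc:
        # QMC (Algorithm 2): a Sobol' point set on [0,1)^3, pushed to the
        # sphere via the inverse Gaussian CDF and normalization (one of
        # several possible constructions; assumed here for illustration).
        u = qmc.Sobol(d=3, scramble=False).random(L)
        g = norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
    else:
        # MC (Algorithm 1): i.i.d. uniform directions on the sphere.
        g = np.random.default_rng(seed).standard_normal((L, 3))
    thetas = g / np.linalg.norm(g, axis=1, keepdims=True)  # (L, 3) directions

    # Project both clouds onto each direction; the 1D Wasserstein distance
    # between equal-size empirical measures reduces to sorted differences.
    proj_X = np.sort(X @ thetas.T, axis=0)  # (n, L)
    proj_Y = np.sort(Y @ thetas.T, axis=0)
    return np.mean(np.abs(proj_X - proj_Y) ** p) ** (1 / p)
```

In this sketch, `sliced_wasserstein(X, Y, use_qmc=True)` replaces the random directions with a deterministic low-discrepancy set, which is the core change behind QSW; scrambling the point set (Algorithm 3) or applying a random rotation to it (Algorithm 4) would re-randomize the same construction, which is the motivation for the RQSW variants.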
Open Source Code: Yes. Evidence: "Code for the paper is published at https://github.com/khainb/Quasi-SW."
Open Datasets: Yes. Evidence: "We randomly select four point-clouds (1, 2, 3, and 4, each with 2048 points in 3 dimensions) from the ShapeNet Core-55 dataset (Chang et al., 2015), as shown in Figure 1."
Dataset Splits: No. The paper discusses training and testing, and mentions L = 100 as the number of projections, but does not specify how the dataset was split into train/validation/test sets by percentages or counts.
Hardware Specification: Yes. Evidence: "We use a single NVIDIA V100 GPU to conduct experiments on training the deep point-cloud autoencoder. Other applications are run on a desktop with an Intel Core i5 CPU."
Software Dependencies: No. The paper mentions the POT library (Flamary et al., 2021) but does not specify its version number. It also mentions the Point-Net architecture (Qi et al., 2017), but no software versions are provided for this or any other dependency.
Experiment Setup: Yes. Evidence: "We aim to optimize the objective $\min_{\phi,\psi} \mathbb{E}_{X \sim \mu}\left[\mathrm{SW}_p\left(P_X, P_{g_\psi(f_\phi(X))}\right)\right]$, where $\mu$ is the data distribution, and $f_\phi$ and $g_\psi$ are a deep encoder and a deep decoder with the Point-Net (Qi et al., 2017) architecture. To optimize the objective, we use conventional MC estimation, QSW, and RQSW to approximate the gradients with respect to $\phi$ and $\psi$. We then utilize the standard SGD optimizer to train the autoencoder (with an embedding size of 256) for 400 epochs with a learning rate of 1e-3, a batch size of 128, a momentum of 0.9, and a weight decay of 5e-4."
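The following is a hedged PyTorch sketch of this training setup, under stated assumptions: small MLPs stand in for the Point-Net encoder $f_\phi$ and decoder $g_\psi$, random tensors stand in for the point-cloud data, and `sw_loss` is a differentiable MC estimate of $\mathrm{SW}_p$ in the spirit of the sketch above; none of these names come from the authors' code.

```python
import torch
import torch.nn as nn

n_points, embed = 2048, 256  # 2048-point clouds, embedding size 256 (as stated above)

# Placeholder encoder f_phi / decoder g_psi (the paper uses Point-Net;
# these small MLPs only keep the sketch self-contained and runnable).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(n_points * 3, embed), nn.ReLU())
decoder = nn.Sequential(nn.Linear(embed, n_points * 3), nn.Unflatten(1, (n_points, 3)))


def sw_loss(X, Y, L=100, p=2):
    # Differentiable MC estimate of SW_p: L random directions on the sphere,
    # then the closed-form 1D Wasserstein distance via sorted projections.
    theta = torch.randn(L, 3, device=X.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    proj_X = torch.sort(X @ theta.T, dim=1).values  # (batch, n_points, L)
    proj_Y = torch.sort(Y @ theta.T, dim=1).values
    return ((proj_X - proj_Y).abs() ** p).mean() ** (1.0 / p)


# Random tensors standing in for the ShapeNet point clouds.
data = torch.utils.data.TensorDataset(torch.randn(512, n_points, 3))
loader = torch.utils.data.DataLoader(data, batch_size=128, shuffle=True)

optimizer = torch.optim.SGD(
    list(encoder.parameters()) + list(decoder.parameters()),
    lr=1e-3, momentum=0.9, weight_decay=5e-4,
)

for epoch in range(400):
    for (X,) in loader:
        X_hat = decoder(encoder(X))
        loss = sw_loss(X, X_hat)  # MC estimate; QSW/RQSW would change the directions
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Swapping the random directions inside `sw_loss` for a (randomized) low-discrepancy point set is the only change needed to turn this MC baseline into the QSW or RQSW variants of the training loop.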