Energy-Based Sliced Wasserstein Distance
Authors: Khai Nguyen, Nhat Ho
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct experiments on point-cloud gradient flow, color transfer, and point-cloud reconstruction to show the favorable performance of the EBSW. ... 3. We apply the EBSW to various tasks including gradient flows, color transfer, and point-cloud applications. According to the experimental result, the EBSW performs better than existing projection-selection sliced Wasserstein variants... |
| Researcher Affiliation | Academia | Khai Nguyen Department of Statistics and Data Sciences The University of Texas at Austin Austin, TX 78712 khainb@utexas.edu Nhat Ho Department of Statistics and Data Sciences The University of Texas at Austin Austin, TX 78712 minhnhat@utexas.edu |
| Pseudocode | Yes | We provide the pseudo-code for computing the SW in Algorithm 1. ... We review the algorithm for computing the DSW in Algorithm 3. ... We provide the algorithm for the IS estimation of the EBSW in Algorithm 4. ... We first provide the algorithm for computing the EBSW via the SIR, the IMH, and the RMH in Algorithm 5-7. |
| Open Source Code | Yes | Code for this paper is published at https://github.com/khainb/EBSW. |
| Open Datasets | Yes | We choose µ(0) and ν to be two point-cloud shapes in the ShapeNet Core-55 dataset [4]... We follow [33] to train point-cloud autoencoders with sliced Wasserstein distances on the ShapeNet Core-55 dataset [4]. ... We evaluate the trained autoencoders on a different dataset: the ModelNet40 dataset [46]. |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test splits with percentages or sample counts. It mentions training but not how the data was partitioned into these specific subsets. |
| Hardware Specification | Yes | For the point-cloud gradient flows and the color transfer, we use a Macbook Pro M1 for conducting experiments. For deep point-cloud reconstruction, experiments are run on a single NVIDIA V100 GPU. |
| Software Dependencies | No | The paper mentions software components like "SGD optimizer" and "von Mises kernel density estimation" but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | Here, we set L = 100 for SW, and EBSW variants. For the Max-SW we set T = 100, and report the best result for the step size for finding the max projecting direction in {0.001, 0.01, 0.1}. For the v-DSW, we report the best result for (L, T) ∈ {(10, 10), (50, 2), (2, 50)}, κ ∈ {1, 10, 50}, and the learning rate for finding the location in {0.001, 0.01, 0.1}. ... We train the autoencoder for 200 epochs using an SGD optimizer with a learning rate of 1e-3, a batch size of 128, a momentum of 0.9, and a weight decay of 5e-4. |
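The pseudocode row references Algorithm 1 of the paper, the standard Monte Carlo estimator of the sliced Wasserstein distance with L random projecting directions (the experiments use L = 100). The released code is at the GitHub link above; the sketch below is an illustrative NumPy reimplementation of that standard estimator, with hypothetical names, not the authors' implementation.

```python
import numpy as np

def sliced_wasserstein(X, Y, L=100, p=2, rng=None):
    """Monte Carlo estimate of the p-sliced Wasserstein distance between
    two equal-size point clouds X, Y of shape (n, d). Illustrative sketch;
    names and structure are not taken from the paper's code."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Draw L projecting directions uniformly on the unit sphere.
    theta = rng.standard_normal((L, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both clouds onto each direction: shape (n, L).
    proj_X = np.sort(X @ theta.T, axis=0)
    proj_Y = np.sort(Y @ theta.T, axis=0)
    # 1D Wasserstein-p between empirical measures = sorted-sample matching;
    # average W_p^p over directions, then take the p-th root.
    return float((np.abs(proj_X - proj_Y) ** p).mean() ** (1.0 / p))
```

The key property exploited here is that the one-dimensional Wasserstein distance between empirical measures of equal size reduces to matching sorted samples, which is why each projection costs only a sort.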
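Algorithm 4 in the paper is the importance-sampling (IS) estimator of the EBSW: directions are drawn from a uniform proposal and reweighted by an energy function of their projected one-dimensional distances, so that "important" directions contribute more. The following is a minimal sketch of that idea, assuming an exponential energy function; the exact estimator and normalization follow the paper, not this code.

```python
import numpy as np

def is_ebsw(X, Y, L=100, p=2, energy=np.exp, rng=None):
    """Importance-sampling sketch of the energy-based sliced Wasserstein
    distance between equal-size point clouds X, Y of shape (n, d).
    Uniform proposal over directions; self-normalized weights proportional
    to energy(W_p^p). Illustrative only."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    theta = rng.standard_normal((L, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    proj_X = np.sort(X @ theta.T, axis=0)  # (n, L)
    proj_Y = np.sort(Y @ theta.T, axis=0)
    # Per-direction one-dimensional Wasserstein-p distances to the power p.
    wp = (np.abs(proj_X - proj_Y) ** p).mean(axis=0)  # (L,)
    # Self-normalized importance weights proportional to the energy.
    weights = energy(wp)
    weights /= weights.sum()
    return float((weights * wp).sum() ** (1.0 / p))
```

Compared with the plain Monte Carlo average, the energy weighting upweights discriminative directions, which is the mechanism the paper contrasts against Max-SW and v-DSW style projection selection.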
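The reported autoencoder training uses SGD with learning rate 1e-3, momentum 0.9, and weight decay 5e-4. The paper does not spell out the update rule, so the sketch below assumes the conventional PyTorch-style update in which the L2 penalty is folded into the gradient before the momentum buffer is updated.

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=1e-3, momentum=0.9, weight_decay=5e-4):
    """One SGD update with momentum and L2 weight decay, using the
    hyperparameters reported in the paper. Assumed conventional update
    rule, not taken from the authors' code."""
    g = grad + weight_decay * w          # fold the L2 penalty into the gradient
    velocity = momentum * velocity + g   # heavy-ball momentum buffer
    w = w - lr * velocity                # parameter update
    return w, velocity
```

With zero gradient, the weight-decay term alone shrinks the parameters slightly each step, which is the intended regularizing effect of the 5e-4 setting.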