Sliced Gromov-Wasserstein

Authors: Titouan Vayer, Rémi Flamary, Nicolas Courty, Romain Tavenard, Laetitia Chapel

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental
    From Section 4 ("Experimental results"): "The goal of this section is to validate SGW and its rotational invariant on both quantitative (execution time) and qualitative sides."
Researcher Affiliation | Academia
    Titouan Vayer, Univ. Bretagne-Sud, CNRS, IRISA, F-56000 Vannes (titouan.vayer@irisa.fr); Rémi Flamary, Univ. Côte d'Azur, OCA, Lagrange, F-06000 Nice (remi.flamary@unice.fr); Romain Tavenard, Univ. Rennes, CNRS, LETG, F-35000 Rennes (romain.tavenard@univ-rennes2.fr); Laetitia Chapel, Univ. Bretagne-Sud, CNRS, IRISA, F-56000 Vannes (laetitia.chapel@irisa.fr); Nicolas Courty, Univ. Bretagne-Sud, CNRS, IRISA, F-56000 Vannes (nicolas.courty@irisa.fr)
Pseudocode | No
    The paper does not contain any pseudocode or algorithm blocks.
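Note: since the paper gives no pseudocode, the snippet below is a minimal sketch of the SGW computation as described in the text (project both clouds onto random lines, sort the projections, and solve each 1D GW problem in closed form). All names are ours, not the authors'; it assumes uniform weights, equal sample sizes, and a shared ambient dimension (the paper handles mismatched dimensions with a padding operator).

    import numpy as np

    def gw_1d(a, b):
        # 1D GW with quadratic loss between two sorted samples; per the
        # paper's 1D result, the optimal coupling is either the identity
        # or the anti-identity permutation of the sorted values.
        da = np.abs(a[:, None] - a[None, :])
        def cost(bp):
            db = np.abs(bp[:, None] - bp[None, :])
            return ((da - db) ** 2).mean()
        return min(cost(b), cost(b[::-1]))

    def sgw_sketch(xs, xt, n_proj=20, seed=0):
        # Monte-Carlo estimate of SGW: average the 1D GW cost over
        # n_proj random directions shared by both point clouds.
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(n_proj):
            theta = rng.standard_normal(xs.shape[1])
            theta /= np.linalg.norm(theta)
            total += gw_1d(np.sort(xs @ theta), np.sort(xt @ theta))
        return total / n_proj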
Open Source Code | No
    The paper mentions using and implementing various tools (POT, NumPy, PyTorch) but provides neither a link nor an explicit statement that the authors' own source code for the Sliced Gromov-Wasserstein (SGW) method is released.
Open Datasets | Yes
    "As a first example, we use the spiral dataset from sklearn toolbox and compute GW, SGW and RISGW on n = 100 samples with L = 20 sampled lines for different rotations of the target distribution."
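Note: a sketch of that rotation experiment is given below, reusing the illustrative sgw_sketch function above. sklearn has no generator literally called "spiral", so make_spiral is a hypothetical stand-in for the paper's data.

    import numpy as np

    def make_spiral(n=100, noise=0.05, seed=0):
        # Hypothetical 2D spiral sampler (the exact sklearn generator
        # used in the paper is not specified in this report).
        rng = np.random.default_rng(seed)
        t = 3 * np.pi * np.sqrt(rng.uniform(size=n))
        x = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)
        return x + noise * rng.standard_normal(x.shape)

    def rotate(x, angle):
        # Rotate a 2D cloud by `angle` radians to build the target.
        c, s = np.cos(angle), np.sin(angle)
        return x @ np.array([[c, -s], [s, c]]).T

    xs = make_spiral(n=100)
    for angle in np.linspace(0.0, np.pi, 5):
        xt = rotate(xs, angle)
        print(f"rotation={angle:.2f} rad, SGW ~ {sgw_sketch(xs, xt, n_proj=20):.4f}")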
Dataset Splits | No
    The paper mentions dataset sizes (e.g., n = 100 samples, 2D random measures of n ∈ {1e2, ..., 1e6} points) but does not specify how these samples are split into training, validation, or test sets.
Hardware Specification | Yes
    "All the experiments were conducted on a standard computer equipped with a NVIDIA Titan X GPU."
Software Dependencies | No
    The paper mentions software such as the Python Optimal Transport (POT) toolbox, PyTorch, Pymanopt, and autograd, but does not provide version numbers for these dependencies.
Experiment Setup | Yes
    "For SGW, the number of projections L is taken from {50, 200}. We use the Python Optimal Transport (POT) toolbox [46] to compute GW distance on CPU. For entropic-GW we use the Pytorch GPU implementation from [9] that uses the log-stabilized Sinkhorn algorithm [47] with a regularization parameter ε = 100. The Adam optimizer is used, with a learning rate of 2·10⁻⁴ and β1 = 0.5, β2 = 0.99."
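Note: as one concrete reading of this setup, the snippet below maps the quoted settings onto POT and PyTorch calls. It is a sketch, not the authors' code; in particular, the GPU entropic-GW implementation of [9] is represented here only by POT's CPU variant with the same ε.

    import numpy as np
    import ot      # Python Optimal Transport (POT)
    import torch

    xs, xt = np.random.randn(100, 2), np.random.randn(100, 2)
    C1, C2 = ot.dist(xs, xs), ot.dist(xt, xt)    # intra-domain cost matrices
    p, q = ot.unif(len(xs)), ot.unif(len(xt))    # uniform weights

    # Exact GW on CPU with POT, as in the paper's timing baseline.
    gw_cost = ot.gromov.gromov_wasserstein2(C1, C2, p, q, 'square_loss')

    # Entropic GW with regularization eps = 100 (CPU stand-in for [9]).
    egw_cost = ot.gromov.entropic_gromov_wasserstein2(
        C1, C2, p, q, 'square_loss', epsilon=100)

    # Adam configured with the quoted hyperparameters; the parameters
    # here are placeholders for whichever variables the experiment optimizes.
    params = [torch.randn(2, 2, requires_grad=True)]
    optimizer = torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.99))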