Fast Optimal Transport through Sliced Generalized Wasserstein Geodesics

Authors: Guillaume Mahey, Laetitia Chapel, Gilles Gasso, Clément Bonet, Nicolas Courty

NeurIPS 2023

Reproducibility assessment. Each entry lists the variable, the assessed result, and the supporting excerpt from the paper (LLM response).
Research Type — Experimental. "Empirical evidence supports the benefits of min-SWGG in various contexts, from gradient flows, shape matching and image colorization, among others." (Section 5, Experiments)
Researcher Affiliation — Academia.
Guillaume Mahey — INSA Rouen Normandie (LITIS) and Université Bretagne Sud (IRISA), guillaume.mahey@insa-rouen.fr
Laetitia Chapel — Université Bretagne Sud and Institut Agro Rennes-Angers (IRISA), laetitia.chapel@irisa.fr
Gilles Gasso — INSA Rouen Normandie (LITIS), gilles.gasso@insa-rouen.fr
Clément Bonet — Université Bretagne Sud (LMBA), clement.bonet@univ-ubs.fr
Nicolas Courty — Université Bretagne Sud (IRISA), nicolas.courty@univ-ubs.fr
Pseudocode — Yes. Algorithm 1, "Computing SWGG₂²(µ₁, µ₂, θ)".
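For readers unfamiliar with the quantity, here is a minimal sketch of what such a routine computes, assuming two same-size point clouds with uniform weights: project both clouds onto θ, sort the projections, and evaluate the squared cost of the induced assignment in the ambient space. Function and variable names are ours, reconstructed from the paper's description rather than taken from its code.

```python
import numpy as np

def swgg2(X, Y, theta):
    """Sketch of SWGG_2^2(mu1, mu2, theta) for point clouds X, Y of
    shape (n, d) and a direction theta of shape (d,).

    Sorting the 1D projections yields a one-to-one assignment between
    X and Y; its mean squared cost in R^d upper-bounds W_2^2(mu1, mu2).
    """
    theta = theta / np.linalg.norm(theta)   # work with a unit direction
    sigma = np.argsort(X @ theta)           # ranks of projected source points
    tau = np.argsort(Y @ theta)             # ranks of projected target points
    # Match the i-th smallest projection of X to the i-th smallest of Y.
    return np.mean(np.sum((X[sigma] - Y[tau]) ** 2, axis=1))

# Toy usage: any direction gives an upper bound on W_2^2.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(100, 2)), rng.normal(loc=3.0, size=(100, 2))
print(swgg2(X, Y, rng.normal(size=2)))
```

min-SWGG is then obtained by minimizing this quantity over θ, either over random candidate directions or via the gradient-based scheme quoted under Experiment Setup below.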
Open Source Code — Yes. "All the code is available at https://github.com/MaheyG/SWGG"
Open Datasets — Yes. "In this experiment, we compare the following datasets: MNIST [39], EMNIST [18], Fashion MNIST [67], KMNIST [17] and USPS [32]."
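All five are standard public benchmarks; a hedged sketch of loading them, assuming torchvision (the paper does not state which loader it uses, and the root path and EMNIST split are hypothetical choices):

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
root = "./data"  # hypothetical download location

mnist = datasets.MNIST(root, train=True, download=True, transform=to_tensor)
# EMNIST requires a split argument; "balanced" is an arbitrary choice here.
emnist = datasets.EMNIST(root, split="balanced", train=True, download=True, transform=to_tensor)
fashion = datasets.FashionMNIST(root, train=True, download=True, transform=to_tensor)
kmnist = datasets.KMNIST(root, train=True, download=True, transform=to_tensor)
usps = datasets.USPS(root, train=True, download=True, transform=to_tensor)
```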
Dataset Splits — No. The paper uses Gaussian distributions, images (for colorization), point clouds, and several standard image datasets (MNIST, EMNIST, etc.), but it does not specify explicit training/validation/test splits, percentages, or sample counts for its own experiments.
Hardware Specification — No. The paper notes that the empirical runtime evaluation was performed "on GPU" but gives no specific hardware details such as GPU model, CPU type, or memory.
Software Dependencies — No. The paper mentions using "the Python OT Toolbox [28]" and "the Geomloss package [27]" but does not give version numbers for these or any other dependencies.
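For reference, the typical entry points of those two libraries, as a sketch (the paper does not show its exact calls, and pinning versions is our suggestion since none are given):

```python
# pip install pot geomloss torch   (versions unpinned in the paper)
import numpy as np
import ot                       # POT: Python Optimal Transport toolbox
import torch
from geomloss import SamplesLoss

x = np.random.randn(100, 2)
y = np.random.randn(100, 2) + 3.0

# Exact squared W_2 with POT: squared-Euclidean cost, uniform weights.
M = ot.dist(x, y)               # pairwise squared Euclidean distances
w2_sq = ot.emd2([], [], M)      # empty lists denote uniform histograms

# Entropy-regularized counterpart with GeomLoss (debiased Sinkhorn
# divergence; with p=2 the cost is |x - y|^2 / 2).
sinkhorn = SamplesLoss("sinkhorn", p=2, blur=0.05)
approx = sinkhorn(torch.as_tensor(x), torch.as_tensor(y))
print(w2_sq, approx.item())
```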
Experiment Setup — Yes. "For optimizing min-SWGG, we use the Adam optimizer of PyTorch, with a fixed learning rate of 5e-2 during 100 iterations, considering s = 10 and ϵ = 1. [...] We fix a global learning rate of 5e-3 with an Adam optimizer. [...] The hyperparameters for the optimization of min-SWGG are s = 10 and ϵ = 0.5, except for the 500-dimensional Gaussian, for which we pick ϵ = 10."
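To make these settings concrete, a minimal sketch of such a loop, using the quoted values (Adam, learning rate 5e-2, 100 iterations, s = 10, ϵ = 1). The surrogate loss below is our simplified reading of the paper's smoothing, replicating each projected point s times with noise of scale ϵ so that gradients with respect to θ are nonzero; it is illustrative, not the authors' exact objective.

```python
import torch

def smoothed_swgg(X, Y, theta, s=10, eps=1.0):
    """Differentiable surrogate along direction theta (our reconstruction:
    s noisy copies of each projection make the sort-based loss smooth)."""
    theta = theta / torch.norm(theta)
    px = (X @ theta).repeat_interleave(s) + eps * torch.randn(X.shape[0] * s)
    py = (Y @ theta).repeat_interleave(s) + eps * torch.randn(Y.shape[0] * s)
    return ((torch.sort(px).values - torch.sort(py).values) ** 2).mean()

torch.manual_seed(0)
X = torch.randn(200, 2)
Y = torch.randn(200, 2) + 3.0
theta = torch.randn(2, requires_grad=True)
opt = torch.optim.Adam([theta], lr=5e-2)    # fixed learning rate from the paper
for _ in range(100):                        # 100 iterations, as quoted
    opt.zero_grad()
    loss = smoothed_swgg(X, Y, theta, s=10, eps=1.0)
    loss.backward()
    opt.step()
# After optimization, the exact SWGG_2^2 would be evaluated at the learned theta.
```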