Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Gromov-Wasserstein-like Distances in the Gaussian Mixture Models Space
Authors: Antoine Salmona, Agnès Desolneux, Julie Delon
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 5, we illustrate the practical use of our distances on medium-to-large scale problems such as shape matching and hyperspectral image color transfer, and we compare the performance of our methods with other recent GW-based approaches, both on assessing distances between clouds of points and drawing correspondences between points. All the proofs are postponed to the appendix. |
| Researcher Affiliation | Academia | Antoine Salmona, ENS Paris-Saclay, CNRS, Centre Borelli UMR 9010; Julie Delon, Université de Paris, CNRS, MAP5 UMR 8145 and Institut Universitaire de France; Agnès Desolneux, ENS Paris-Saclay, CNRS, Centre Borelli UMR 9010 |
| Pseudocode | Yes | **Algorithm 1** Mixture Embedded Wasserstein solver.<br>Require: µ = Σ_k a_k µ_k, ν = Σ_l b_l ν_l, P^(0) ∈ V_d′(R^d), η > 0.<br>1: while not converged do<br>2: &nbsp;&nbsp;[C]_{k,l} ← W₂²(µ_k, ν_l) for k = 1, …, K; l = 1, …, L<br>3: &nbsp;&nbsp;ω^(i) ← Solve-OT(a, b, C) ▷ Solve a classic OT problem.<br>4: &nbsp;&nbsp;while not converged do ▷ Projected gradient descent on P.<br>5: &nbsp;&nbsp;&nbsp;&nbsp;A ← P^(i−1) − η ∂J_{ω^(i)}(P^(i−1))/∂P<br>6: &nbsp;&nbsp;&nbsp;&nbsp;U, Σ, Vᵀ ← SVD(A)<br>7: &nbsp;&nbsp;&nbsp;&nbsp;P^(i) ← U Id_{d,d′} Vᵀ<br>8: &nbsp;&nbsp;end while<br>9: end while<br>10: return ω, P<br>**Algorithm 2** Annealed initialization procedure for mixture embedded Wasserstein.<br>Require: a, b, {m₀ₖ}, {m₁ₗ}, ε₀ > 0, α ∈ (0, 1), P^(0) = Id_{d,d′}.<br>1: for i = 1, …, N_it do<br>2: &nbsp;&nbsp;[C]_{k,l} ← ‖m₀ₖ − P^(i−1) m₁ₗ‖²<br>3: &nbsp;&nbsp;ω^(i) ← ε-OT(a, b, C, ε_{i−1}) ▷ Solve a regularized OT problem.<br>4: &nbsp;&nbsp;A ← Σ_{k,l} ω^(i)_{k,l} m₀ₖ m₁ₗᵀ<br>5: &nbsp;&nbsp;U, Σ, Vᵀ ← SVD(A)<br>6: &nbsp;&nbsp;P^(i) ← U Id_{d,d′} Vᵀ<br>7: &nbsp;&nbsp;ε_i ← α ε_{i−1} ▷ Annealing scheme.<br>8: end for<br>9: return P |
| Open Source Code | Yes | Code is available here (footnote 6): https://github.com/AntoineSalmona/Mixture_Gromov_Wasserstein |
| Open Datasets | Yes | To illustrate the practical use of MGW2 on a simple toy example, we draw 150 samples from the spiral dataset provided in the scikit-learn toolbox (Pedregosa et al., 2011) and we apply rotations with various angles on this dataset. Here we reproduce the experiment of the galloping horse, that has been originally conducted in Rustamov et al. (2013) and presented in Solomon et al. (2016). We reproduce here an experiment from Chowdhury et al. (2021). The goal is to match 3D meshes from the CAPOD dataset (Papadakis, 2014). To demonstrate the usability of our methods in larger scale settings, we use the SHREC 19 dataset (Melzi et al., 2019) that contains human-shaped meshes. |
| Dataset Splits | No | The paper describes using specific datasets or portions of them (e.g., "150 samples from the spiral dataset", "45 meshes representing a galloping horse", "3D meshes from the CAPOD dataset", "SHREC 19 dataset") for evaluation tasks like shape matching and color transfer. However, it does not specify any training/test/validation splits for these datasets in the context of developing or evaluating a model in a typical machine learning setup (e.g., 80/10/10 split, k-fold cross-validation, or explicit predefined splits for model training). |
| Hardware Specification | No | The paper mentions "implementation on CPU" for SGW in the runtimes comparison, but it does not specify any particular CPU model or other hardware components (like GPU models, memory, or specific computing platforms) used for running their own experiments or for the general comparisons. |
| Software Dependencies | No | In all our experiments, we use the numerical solvers provided by the Python Optimal Transport (POT) package (Flamary et al., 2021), which implements solvers for the non-regularized and regularized classic OT and GW problems. The POT package is accessible at https://pythonot.github.io/ and the scikit-learn toolbox at https://scikit-learn.org/stable/. The paper mentions the "Python Optimal Transport (POT) package" and the "scikit-learn toolbox" but does not specify their version numbers. |
| Experiment Setup | Yes | More precisely, we propose to set the initial P as the solution of the following iterative procedure. First we solve an entropic-regularized W₂ problem between the two discrete measures µ̄ = Σ_k a_k δ_{m₀ₖ} and ν̄ = Σ_l b_l δ_{m₁ₗ} with a large value of regularization ε₀ in order to obtain a coupling ω^(1). Then we set... In practice, we set in all our experiments α = 0.95 and ε₀ = 1 as in Alvarez-Melis et al. (2019). Furthermore, we observed that in most cases, setting N_it = 10 was sufficient to obtain a good initialization of P for Algorithm 1. For MGW2, we use GMMs with respectively K = {10, 20, 50} components. For this experiment, we observed that setting the number of Gaussian components to K = 15 was a good compromise between capturing the complexity of the color distributions and obtaining a relatively regular mapping T_mean. |
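The cost matrix in the paper's Algorithm 1 (line 2 of the pseudocode quoted above) is built from squared Wasserstein-2 distances between Gaussian components, which admit the well-known closed Bures-Wasserstein form. A minimal sketch, assuming only numpy and scipy; the function name `w2_gaussians` is ours, not the paper's:

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussians(m0, S0, m1, S1):
    """Closed-form W2^2 between N(m0, S0) and N(m1, S1):
    ||m0 - m1||^2 + Tr(S0 + S1 - 2 (S0^{1/2} S1 S0^{1/2})^{1/2})."""
    s0h = sqrtm(S0)                         # matrix square root of S0
    mid = sqrtm(s0h @ S1 @ s0h)             # may carry tiny imaginary noise
    return float(np.sum((m0 - m1) ** 2)
                 + np.trace(S0 + S1 - 2.0 * np.real(mid)))
```

For isotropic Gaussians N(0, σ₀²I) and N(0, σ₁²I) in dimension d this reduces to d(σ₀ − σ₁)², a convenient sanity check on the implementation.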
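The annealed initialization (Algorithm 2 quoted above, with the reported hyperparameters α = 0.95, ε₀ = 1, N_it = 10) can be sketched in plain numpy. This is not the authors' implementation, which relies on the POT package's solvers; `sinkhorn` and `annealed_init` below are our own minimal stand-ins:

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=200):
    """Minimal entropic-regularized OT solver (Sinkhorn iterations)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]        # transport plan omega

def annealed_init(a, b, m0, m1, eps0=1.0, alpha=0.95, n_it=10):
    """Annealed initialization sketch: m0 is (K, d) source GMM means,
    m1 is (L, d') target GMM means. Returns P of shape (d, d')."""
    d, dp = m0.shape[1], m1.shape[1]
    P = np.eye(d, dp)                         # P^(0) = Id_{d,d'}
    eps = eps0
    for _ in range(n_it):
        # [C]_{k,l} = ||m0k - P m1l||^2
        C = ((m0[:, None, :] - m1[None, :, :] @ P.T) ** 2).sum(-1)
        w = sinkhorn(a, b, C, eps)            # regularized OT plan
        A = m0.T @ w @ m1                     # sum_{k,l} w_{k,l} m0k m1l^T
        # reduced SVD: U @ Vt equals U Id_{d,d'} V^T from the pseudocode
        U, _, Vt = np.linalg.svd(A, full_matrices=False)
        P = U @ Vt
        eps *= alpha                          # annealing scheme
    return P
```

The SVD step retracts the update onto the matrices with orthonormal columns, so the returned P always satisfies PᵀP = I.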
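The experiment-setup evidence fits GMMs with a chosen number of components (e.g. K = 15 for color transfer) before computing the mixture distances. A hedged sketch of that preprocessing step using scikit-learn's `GaussianMixture`, with random data standing in for an image's RGB pixels:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
pixels = rng.random((1000, 3))     # placeholder for an image's RGB values

# K = 15 is the compromise the paper reports for color distributions.
gmm = GaussianMixture(n_components=15, covariance_type="full",
                      random_state=0).fit(pixels)

# The mixture parameters (weights a_k, means m_k, covariances S_k) are
# exactly the inputs the MGW2 solvers operate on.
weights, means, covs = gmm.weights_, gmm.means_, gmm.covariances_
```

The `random_state` pin is ours, for reproducibility of the sketch; the paper does not report seeds.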