Entropic Gromov-Wasserstein between Gaussian Distributions
Authors: Khang Le, Dung Q Le, Huy Nguyen, Dat Do, Tung Pham, Nhat Ho
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we will use the derived closed forms to inspect the behavior of algorithms solving the entropic Gromov-Wasserstein problem, in particular those studied in (Peyré et al., 2016). We use the implementation of these algorithms in the Python Optimal Transport library (Flamary & Courty, 2017). The implementation is available at https://github.com/lntk/egw_gaussians. |
| Researcher Affiliation | Collaboration | 1University of Texas at Austin 2École Polytechnique 3VinAI Research 4University of Michigan, Ann Arbor. |
| Pseudocode | No | The paper refers to existing algorithms from prior work (e.g., Peyré et al., 2016) but does not provide its own pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The implementation is available at https://github.com/lntk/egw_gaussians. |
| Open Datasets | No | The paper generates synthetic data by sampling 'N (between 10 and 2000) data points from N(0_2, Σ_µ) and N(0_3, Σ_ν)' or creating '1000-bin histograms of N(0, α) and N(0, β)', without referencing a publicly available dataset by name or providing a direct access link or citation. |
| Dataset Splits | No | The paper describes generating synthetic data for empirical studies but does not explicitly mention any training, validation, or test dataset splits. The experiments focus on comparing empirical behavior with theoretical closed-form expressions. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper mentions using the 'Python Optimal Transport library (Flamary & Courty, 2017)' but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | The regularization parameter ε is chosen from {0.5, 1, 5}, and τ is fixed to 1. In another setting, ε is set to 0.1, which satisfies all the constraints in Theorem 5.4. The algorithm is run till convergence (with tolerance 10⁻⁹). |
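The setup quoted above (the entropic Gromov-Wasserstein solver of Peyré et al., 2016, as implemented in the POT library, run on synthetic Gaussian samples for a given ε) can be illustrated without the library itself. The following is a minimal NumPy sketch of the square-loss entropic GW iteration (alternating a linearized cost with Sinkhorn projections); the function names, iteration counts, and the 2-D/3-D Gaussian samples are our assumptions for illustration, not the paper's or POT's actual implementation.

```python
import numpy as np

def pairwise_dist(x):
    """Euclidean distance matrix of a point cloud."""
    diff = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def sinkhorn(cost, p, q, eps, n_iter=300):
    """Entropic OT plan for a fixed cost matrix via Sinkhorn iterations."""
    K = np.exp(-(cost - cost.min()) / eps)  # shift cost for numerical stability
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]

def entropic_gw(C1, C2, p, q, eps, outer_iter=30):
    """Square-loss entropic GW: alternate a linearized cost and Sinkhorn."""
    # For the square loss, <L(C1, C2) x T>_ij = c1_i + c2_j - 2 (C1 T C2^T)_ij
    c1 = (C1 ** 2) @ p
    c2 = (C2 ** 2) @ q
    T = np.outer(p, q)  # independent coupling as initialization
    for _ in range(outer_iter):
        L = c1[:, None] + c2[None, :] - 2.0 * C1 @ T @ C2.T
        T = sinkhorn(L, p, q, eps)
    return T

# Synthetic data in the spirit of the paper: samples from N(0_2, I) and N(0_3, I)
rng = np.random.default_rng(0)
x, y = rng.normal(size=(10, 2)), rng.normal(size=(12, 3))
p = np.full(10, 1 / 10)
q = np.full(12, 1 / 12)
T = entropic_gw(pairwise_dist(x), pairwise_dist(y), p, q, eps=5.0)
```

The returned coupling T is a nonnegative matrix whose marginals match p and q; sweeping ε over {0.5, 1, 5} as in the paper's setup changes how diffuse T is.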