The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation

Authors: Thibault Séjourné, François-Xavier Vialard, Gabriel Peyré

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Lastly, we provide numerical experiments on synthetic examples and domain adaptation data with a Positive-Unlabeled learning task to highlight the salient features of the unbalanced divergence and its potential applications in ML."
Researcher Affiliation | Academia | Thibault Séjourné (École Normale Supérieure, DMA, PSL; thibault.sejourne@ens.fr); François-Xavier Vialard (Université Gustave Eiffel; francois-xavier.vialard@u-pem.fr); Gabriel Peyré (École Normale Supérieure, DMA, CNRS, PSL; gabriel.peyre@ens.fr)
Pseudocode | Yes | "Algorithm 1 UGW(X, Y, ρ, ε); Input: mm-spaces (X, Y), relax. ρ, regul. ε; Output: π, γ solving (6)" (a structural sketch of this alternating scheme follows the table)
Open Source Code | Yes | "All implementations are available at https://github.com/thibsej/unbalanced_gromov_wasserstein, and installable in Python with the command pip install unbalancedgw." (a hedged install example follows the table)
Open Datasets | Yes | "We consider PU learning over the Caltech office dataset used for domain adaptation tasks (with domains Caltech (C) Griffin et al. [2007], Amazon (A), Webcam (W) and DSLR (D) Saenko et al. [2010])."
Dataset Splits | Yes | "We report the accuracy of the prediction over the same 20 folds of the datasets, and use 20 other folds to validate the parameters of UGW. ... The values (ρ1, ρ2) ∈ {2ᵏ, k ∈ ⟦−5, 10⟧}² are cross-validated for each task on the validation folds, and we report the average accuracy on the testing folds."
Hardware Specification | No | The paper mentions that the algorithm is "GPU-friendly" but does not provide any specific hardware details such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper states that the code is "installable in Python with the command pip install unbalancedgw" and refers to the unbalanced Sinkhorn algorithm, but it does not specify a Python version or any other software dependencies with version numbers (e.g., specific deep learning frameworks or numerical libraries).
Experiment Setup | Yes | "We set ε = 2⁻⁹, which avoids introducing an extra parameter in the method. The values (ρ1, ρ2) ∈ {2ᵏ, k ∈ ⟦−5, 10⟧}² are cross-validated for each task on the validation folds, and we report the average accuracy on the testing folds. We consider 100 random samples for each fold of (X, Y), a ratio of positive samples r = 0.1 for domains (C, A, W, D), and a ratio r = 0.2 for domains (C, A, W)." (the selection protocol is sketched in code after this table)
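
Algorithm 1 alternates between the two couplings: with one plan fixed, the other is obtained by solving an entropic unbalanced OT problem whose cost depends on the fixed plan, followed by a mass-rescaling step. The NumPy sketch below reproduces that alternating structure under simplifying assumptions: it keeps only the quadratic Gromov-Wasserstein part of the local cost (the paper's exact cost carries additional KL and entropy correction terms), uses plain rather than log-domain-stabilized Sinkhorn iterations, and takes a single ρ for both marginal penalties. All names (Dx, Dy, mu, nu, ugw_sketch) are illustrative, not taken from the authors' code.

```python
import numpy as np

def unbalanced_sinkhorn(C, mu, nu, rho, eps, n_iter=500):
    # Scaling iterations for entropic OT with KL marginal penalties
    # (Chizat et al. 2018); the exponent rho/(rho+eps) implements the
    # soft marginal constraints. Log-domain stabilization is omitted,
    # so very small eps may underflow in this sketch.
    K = np.exp(-C / eps)
    u, v = np.ones_like(mu), np.ones_like(nu)
    lam = rho / (rho + eps)
    for _ in range(n_iter):
        u = (mu / (K @ v)) ** lam
        v = (nu / (K.T @ u)) ** lam
    return u[:, None] * K * v[None, :]

def local_cost(Dx, Dy, gamma):
    # Quadratic GW part of the local cost: C_ij = sum_kl (Dx_ik - Dy_jl)^2 gamma_kl,
    # expanded as Dx^2 gamma_1 + Dy^2 gamma_2 - 2 Dx gamma Dy^T (Peyre et al. 2016).
    g1, g2 = gamma.sum(axis=1), gamma.sum(axis=0)
    return ((Dx**2) @ g1)[:, None] + ((Dy**2) @ g2)[None, :] - 2 * Dx @ gamma @ Dy.T

def ugw_sketch(Dx, Dy, mu, nu, rho, eps, n_outer=50):
    # Initialize both plans at the rescaled product measure.
    mass = np.sqrt(mu.sum() * nu.sum())
    pi = gamma = np.outer(mu, nu) / mass
    for _ in range(n_outer):
        mg = gamma.sum()
        # Dividing the cost by m(gamma) is equivalent to scaling (rho, eps)
        # by m(gamma), mirroring the scaling used in Algorithm 1.
        pi = unbalanced_sinkhorn(local_cost(Dx, Dy, gamma) / mg, mu, nu, rho, eps)
        pi = pi * np.sqrt(mg / pi.sum())          # mass-rescaling step
        mp = pi.sum()
        gamma = unbalanced_sinkhorn(local_cost(Dx, Dy, pi) / mp, mu, nu, rho, eps)
        gamma = gamma * np.sqrt(mp / gamma.sum())
    return pi, gamma
```

With mu and nu the (possibly unnormalized) weights of two point clouds and Dx, Dy their pairwise distance matrices, `ugw_sketch(Dx, Dy, mu, nu, rho, eps)` returns a pair (π, γ), matching the output signature quoted in the Pseudocode row.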
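The installation command is quoted directly from the paper; the import path below is an assumption based on the package name and should be checked against the repository README, since the paper does not document the Python API.

```python
# Quoted from the paper: the reference implementation is installed with
#   pip install unbalancedgw
# The module path below is assumed from the package name (not confirmed by
# the paper); see https://github.com/thibsej/unbalanced_gromov_wasserstein
# for the actual entry points.
import unbalancedgw  # hypothetical import
```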
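The hyperparameter protocol quoted above (fixed ε, dyadic grid for (ρ1, ρ2), selection on validation folds, reporting on test folds) lends itself to a compact sketch. In the snippet below, `evaluate_fold` is a hypothetical stand-in for one UGW-based PU-learning run that returns accuracy on a 100-sample fold; the fold counts (20 validation, 20 test) come from the Dataset Splits row.

```python
import itertools
import numpy as np

EPS = 2.0 ** -9                               # fixed regularization, per the paper
RHO_GRID = [2.0 ** k for k in range(-5, 11)]  # {2**k for k in [[-5, 10]]}

def select_and_test(val_folds, test_folds, evaluate_fold):
    # evaluate_fold(fold, (rho1, rho2), eps) -> accuracy is a hypothetical
    # stand-in for one UGW-based PU-learning run on a single fold.
    best = max(
        itertools.product(RHO_GRID, RHO_GRID),
        key=lambda rhos: np.mean([evaluate_fold(f, rhos, EPS) for f in val_folds]),
    )
    # Report the mean accuracy of the selected (rho1, rho2) on the test folds.
    return best, float(np.mean([evaluate_fold(f, best, EPS) for f in test_folds]))
```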