Unbalanced Optimal Transport through Non-negative Penalized Linear Regression
Authors: Laetitia Chapel, Rémi Flamary, Haoran Wu, Cédric Févotte, Gilles Gasso
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform several numerical experiments on simulated and real data illustrating the new algorithms, and provide a detailed discussion about more sophisticated optimization tools that can further be used to solve OT problems thanks to our reformulation. Our new families of algorithms (MM for general UOT, LARS for ℓ2-penalized UOT) are showcased in the numerical experiments of Section 4. |
| Researcher Affiliation | Academia | Laetitia Chapel IRISA, Université Bretagne-Sud Vannes, France laetitia.chapel@irisa.fr Rémi Flamary CMAP, Ecole Polytechnique Palaiseau, France remi.flamary@polytechnique.edu Haoran Wu LITIS & IRISA Rouen & Vannes, France haoran.wu@univ-ubs.fr Cédric Févotte IRIT, Université de Toulouse, CNRS Toulouse, France cedric.fevotte@irit.fr Gilles Gasso LITIS, INSA Rouen Normandie Rouen, France gilles.gasso@insa-rouen.fr |
| Pseudocode | Yes | Algorithm 1 Regularization path of ℓ2-penalized UOT |
| Open Source Code | Yes | Python implementation of the algorithms, provided in supplementary, will be released with MIT license on GitHub. The connection between UOT and linear regression that we reveal in the paper opens the door to further fruitful developments and in particular to more efficient algorithms, thanks to the large literature dealing with non-negative penalized linear regression. Our implementation of the regularization path has been contributed to POT (Flamary et al., 2021) and the MM algorithms are provided in the repository https://github.com/lchapel/UOT-though-penalized-linear-regression. |
| Open Datasets | Yes | Let the source X be a set of 400 MNIST digits sampled from the digits 0, 1, 2, 3 (100 points per class) and let the target Y be a set of digits 0, 1 of MNIST (LeCun et al., 2010) and of digits 8, 9 from Fashion MNIST (Xiao et al., 2017). |
| Dataset Splits | No | The paper mentions that a "validation set can be used here to select the best λ" but does not provide specific details on train/validation/test splits (percentages, counts, or explicit predefined splits used). |
| Hardware Specification | No | The paper mentions that algorithms "can be instantiated on GPU" and that performance comparisons involved "both CPU and GPU" but does not provide specific hardware details such as GPU models, CPU models, or memory. |
| Software Dependencies | No | The paper mentions software like SciPy, Celer, and Scikit-learn, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | In practice, authors often set λ1 = λ2 = λ for UOT in order to reduce the necessity of hyperparameter tuning. One can notice that, as the number of classified points increases (with λ), the overall accuracy increases as more and more points are well classified, while the current accuracy remains stable until outliers are included in the labeled set. … provided that the transported mass of the target point is greater than 0.25 bj. |
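The reformulation the table quotes, ℓ2-penalized UOT viewed as a non-negative penalized linear regression in t = vec(T), can be sketched as below. This is an illustrative toy solver, not the authors' MM or LARS implementation: it minimizes ⟨C, T⟩ + λ(‖T1 − a‖² + ‖Tᵀ1 − b‖²) over T ≥ 0 with a generic bound-constrained L-BFGS-B routine from SciPy; the problem sizes, the variable names (`H`, `t`, `lam`), and the choice of solver are all assumptions made for exposition.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
n, m = 5, 4
a = rng.random(n); a /= a.sum()          # source marginal
b = rng.random(m); b /= b.sum()          # target marginal
C = rng.random((n, m))                   # ground cost matrix
lam = 10.0                               # marginal-relaxation strength

# Design matrix H stacks the two marginal operators, so that
# H @ t = [T @ 1; T.T @ 1] for t = T.ravel() (row-major).
H = np.vstack([np.kron(np.eye(n), np.ones(m)),   # row sums of T
               np.kron(np.ones(n), np.eye(m))])  # column sums of T
y = np.concatenate([a, b])
c = C.ravel()

def obj(t):
    # Linear OT cost plus squared-l2 marginal penalties: since t >= 0,
    # the term c @ t acts like a weighted l1 penalty, which is the
    # regression view that makes LARS-type algorithms applicable.
    r = H @ t - y
    return c @ t + lam * (r @ r)

def grad(t):
    return c + 2.0 * lam * (H.T @ (H @ t - y))

res = minimize(obj, x0=np.zeros(n * m), jac=grad,
               bounds=[(0.0, None)] * (n * m), method="L-BFGS-B")
T = res.x.reshape(n, m)
# With finite lam, T's marginals only approximately match a and b;
# larger lam enforces them more tightly.
```

A validation criterion (as the paper suggests for selecting the best λ) could then be evaluated along a grid of `lam` values, which is exactly where the quoted regularization-path algorithm avoids re-solving from scratch.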