Outlier-Robust Optimal Transport
Authors: Debarghya Mukherjee, Aritra Guha, Justin M Solomon, Yuekai Sun, Mikhail Yurochkin
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the effectiveness of ROBOT, we consider the task of robust mean estimation under the Huber contamination model. We also use this simulated setup to study sensitivity to the cost truncation hyperparameter λ. In our second experiment, we present a new application of optimal transport enabled by ROBOT. |
| Researcher Affiliation | Collaboration | 1Department of Statistics, University of Michigan 2MIT-IBM Watson AI Lab 3Department of Statistical Science, Duke University 4MIT CSAIL 5IBM Research |
| Pseudocode | Yes | Algorithm 1: Generating an optimal solution of F1 from F2. Algorithm 3: ROBOT GAN. |
| Open Source Code | No | The paper refers to third-party libraries (e.g., POT Python Optimal Transport library, CVXPY, Scikit-learn) that were used, but does not provide a specific link or explicit statement about the availability of the authors' own implementation code for ROBOT. |
| Open Datasets | Yes | The data is generated from (1 − ϵ)N(η0, Id) + ϵN(η1, Id) and the goal is to estimate η0. Specifically, let νm be a clean dataset consisting of 10k MNIST digits from 0 to 4 and µn be a dataset collected in the wild consisting of (different) 8k MNIST digits from 0 to 4 and 2k outlier MNIST images of digits from 5 to 9. |
| Dataset Splits | No | The paper describes the composition of datasets used as input to their method (e.g., clean and contaminated MNIST digits) and performs multiple experiment restarts, but does not specify traditional training/validation/test dataset splits for model training or hyperparameter tuning. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., CPU, GPU models, memory, cloud instances). |
| Software Dependencies | No | The paper mentions several software tools and libraries such as "Python Optimal Transport (POT) library (Flamary & Courty, 2017)", "CVXPY (Diamond & Boyd, 2016)", and "Scikit-learn (Pedregosa et al., 2011)". However, it does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | Algorithm 3 details input parameters such as "robustness regularization λ, entropic regularization α, data distribution µn ..., step sizes τ and γ" and initialization steps. Section 4.1 states, "In the previous experiment, we set λ = 0.5." and discusses varying contamination proportions ϵ and means η0, η1. |
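The simulated setup described in the table above can be sketched in a few lines of numpy: draw data from the Huber contamination mixture (1 − ϵ)N(η0, Id) + ϵN(η1, Id), then truncate a pairwise transport cost at 2λ, which is the cost-truncation mechanism the paper's λ hyperparameter controls. This is an illustrative sketch, not the authors' implementation; the function name `sample_huber`, the squared-Euclidean cost, and the specific sizes are assumptions, while λ = 0.5 matches the value quoted from Section 4.1.

```python
import numpy as np

def sample_huber(n, eps, eta0, eta1, rng=None):
    """Draw n points from the Huber contamination model
    (1 - eps) * N(eta0, I_d) + eps * N(eta1, I_d)."""
    rng = np.random.default_rng(rng)
    eta0 = np.asarray(eta0, dtype=float)
    eta1 = np.asarray(eta1, dtype=float)
    outlier = rng.random(n) < eps                      # which draws are contaminated
    centers = np.where(outlier[:, None], eta1, eta0)   # pick the mixture component's mean
    return centers + rng.standard_normal((n, eta0.size))

# 1000 points in R^2 with 10% contamination; goal is to estimate eta0 = 0
X = sample_huber(1000, eps=0.1, eta0=np.zeros(2), eta1=5 * np.ones(2), rng=0)
Y = sample_huber(1000, eps=0.0, eta0=np.zeros(2), eta1=np.zeros(2), rng=1)  # clean reference

# Cost truncation: clip the pairwise cost at 2*lam before solving OT
# (lam = 0.5 is the value the paper reports using).
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # squared Euclidean cost
lam = 0.5
C_trunc = np.minimum(C, 2 * lam)
```

A truncated cost matrix like `C_trunc` can then be passed to any standard OT solver (e.g. the POT library the paper cites) so that outliers, whose costs are capped, cannot dominate the transport plan.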