Large Scale Optimal Transport and Mapping Estimation

Authors: Vivien Seguy, Bharath Bhushan Damodaran, Rémi Flamary, Nicolas Courty, Antoine Rolet, Mathieu Blondel

ICLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We showcase our proposed approach on two applications: domain adaptation and generative modeling."
Researcher Affiliation | Collaboration | Vivien Seguy, Kyoto University, Graduate School of Informatics; Bharath Bhushan Damodaran, Université de Bretagne-Sud, IRISA, UMR 6074, CNRS; Rémi Flamary, Université Côte d'Azur, Lagrange, UMR 7293, CNRS, OCA; Nicolas Courty, Université de Bretagne-Sud, IRISA, UMR 6074, CNRS; Antoine Rolet, Kyoto University, Graduate School of Informatics; Mathieu Blondel, NTT Communication Science Laboratories
Pseudocode | Yes | Algorithm 1: Stochastic OT computation; Algorithm 2: Optimal map learning with SGD (hedged sketches of both follow the table).
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code availability for the described methodology.
Open Datasets | Yes | "We consider the three cross-domain digit image datasets MNIST (LeCun et al., 1998), USPS, and SVHN (Netzer et al., 2011)."
Dataset Splits | No | The paper specifies dataset sizes for training and target domains but does not explicitly provide percentages or counts for a separate validation split.
Hardware Specification | No | The paper does not provide specific hardware details used for running the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | The Adam optimizer with batch size 1000 is used to optimize the network, and the learning rate is varied in {2, 0.9, 0.1, 0.01, 0.001, 0.0001}. The learned Monge map f in Alg. 2 is parameterized as a neural network with two fully-connected hidden layers (d → 200 → 500 → d) and ReLU activations, and its weights are optimized with Adam using a learning rate of 10^-4 and a batch size of 1000. For the Sinkhorn algorithm, the regularization value is chosen from {0.01, 0.1, 0.5, 0.9, 2.0, 5.0, 10.0}. (A hedged PyTorch sketch of this architecture follows below.)
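
Since the paper's code is not released, the following is a minimal sketch of Algorithm 1 (stochastic computation of the regularized OT dual) for two discrete empirical measures with uniform weights. PyTorch, the function name, the hyperparameter defaults, and the squared Euclidean ground cost are all assumptions for illustration, not the authors' implementation.

```python
import torch

def stochastic_dual_ot(X, Y, eps=0.1, lr=0.1, batch=1000, iters=5000):
    """SGD on the entropy-regularized OT dual (hypothetical sketch).

    X: (n, d) source samples; Y: (m, d) target samples; uniform weights.
    Returns the dual potentials u (n,) and v (m,).
    """
    n, m = X.shape[0], Y.shape[0]
    u = torch.zeros(n, requires_grad=True)  # dual variable for the source measure
    v = torch.zeros(m, requires_grad=True)  # dual variable for the target measure
    opt = torch.optim.SGD([u, v], lr=lr)
    for _ in range(iters):
        i = torch.randint(0, n, (batch,))    # sample source indices
        j = torch.randint(0, m, (batch,))    # sample target indices
        c = ((X[i] - Y[j]) ** 2).sum(dim=1)  # assumed ground cost: squared Euclidean
        # Entropic dual objective: E[u_i + v_j - eps * exp((u_i + v_j - c_ij) / eps)]
        dual = u[i] + v[j] - eps * torch.exp((u[i] + v[j] - c) / eps)
        (-dual.mean()).backward()            # ascend the dual by minimizing its negative
        opt.step()
        opt.zero_grad()
    return u.detach(), v.detach()
```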
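The reported Monge map architecture (d → 200 → 500 → d with ReLU, Adam at 10^-4, batch size 1000) translates to the sketch below, again in PyTorch by assumption. Following the spirit of Algorithm 2, each sampled pair is weighted by the entropic plan density exp((u_i + v_j - c_ij) / eps) built from the potentials of Algorithm 1; the sampling scheme and helper names are hypothetical.

```python
import torch
import torch.nn as nn

def make_monge_map(d):
    # Architecture as reported: d -> 200 -> 500 -> d with ReLU activations.
    return nn.Sequential(
        nn.Linear(d, 200), nn.ReLU(),
        nn.Linear(200, 500), nn.ReLU(),
        nn.Linear(500, d),
    )

def train_monge_map(f, X, Y, u, v, eps=0.1, batch=1000, iters=5000):
    """Fit f toward the barycentric projection of the regularized plan (sketch)."""
    opt = torch.optim.Adam(f.parameters(), lr=1e-4)  # lr 10^-4 and batch 1000 as reported
    n, m = X.shape[0], Y.shape[0]
    for _ in range(iters):
        i = torch.randint(0, n, (batch,))
        j = torch.randint(0, m, (batch,))
        c = ((X[i] - Y[j]) ** 2).sum(dim=1)
        h = torch.exp((u[i] + v[j] - c) / eps)       # entropic plan density weight
        loss = (h * ((f(X[i]) - Y[j]) ** 2).sum(dim=1)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return f
```

A typical call would chain the two sketches: first `u, v = stochastic_dual_ot(X, Y)`, then `f = train_monge_map(make_monge_map(X.shape[1]), X, Y, u, v)`.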