Mapping Estimation for Discrete Optimal Transport

Authors: Michaël Perrot, Nicolas Courty, Rémi Flamary, Amaury Habrard

NeurIPS 2016

Reproducibility assessment (for each variable: the assessed result, followed by the LLM's supporting response):
Research Type: Experimental. The paper states: 'Empirically, we show the interest and the relevance of our method in two tasks: domain adaptation and image editing.' It includes a dedicated section, '4 Experiments', with tables (Tables 1 and 2) reporting accuracy on the Moons and Office-Caltech datasets, and Figure 2 illustrating image-editing results, all of which are empirical evaluations.
Researcher Affiliation: Academia. Michaël Perrot (Univ Lyon, UJM-Saint-Etienne, CNRS, Lab. Hubert Curien UMR 5516, F-42023; michael.perrot@univ-st-etienne.fr); Nicolas Courty (Université de Bretagne Sud, IRISA, UMR 6074, CNRS; courty@univ-ubs.fr); Rémi Flamary (Université Côte d'Azur, Lagrange, UMR 7293, CNRS, OCA; remi.flamary@unice.fr); Amaury Habrard (Univ Lyon, UJM-Saint-Etienne, CNRS, Lab. Hubert Curien UMR 5516, F-42023; amaury.habrard@univ-st-etienne.fr).
Pseudocode: Yes. The paper provides Algorithm 1 (Joint Learning of L and γ):

    input:  Xs, Xt source and target examples, and hyper-parameters λγ, λT
    output: L, γ
    Initialize k = 0, γ⁰ ∈ Π and L⁰ = I
    repeat
        Learn γ^(k+1) by solving problem (6) with fixed L^k, using a Frank-Wolfe approach
        Learn L^(k+1) using Equation (9), (12) or their biased counterparts, with fixed γ^(k+1)
        Set k = k + 1
    until convergence
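For illustration, below is a minimal NumPy sketch of this alternating scheme for the linear, bias-free case, assuming the paper's objective (a data-fitting term ||L(Xs) − ns·γXt||², a transport-cost term weighted by λγ, and a regularizer pulling L toward the identity weighted by λT); ot.emd from the POT library serves as the Frank-Wolfe linear-minimization oracle. Function names, constants, and step sizes are illustrative, not the authors' implementation.

    import numpy as np
    import ot  # POT (Python Optimal Transport): provides ot.dist and the exact solver ot.emd

    def joint_mapping_linear(Xs, Xt, lam_g=1.0, lam_T=1.0, n_outer=20, n_fw=20):
        # Alternating scheme of Algorithm 1 (linear map L, no bias term).
        ns, d = Xs.shape
        nt = Xt.shape[0]
        a = np.full(ns, 1.0 / ns)      # uniform source weights
        b = np.full(nt, 1.0 / nt)      # uniform target weights
        C = ot.dist(Xs, Xt)            # squared-Euclidean ground cost
        L = np.eye(d)                  # L0 = I
        gamma = np.outer(a, b)         # feasible initial coupling in Pi
        for _ in range(n_outer):
            # gamma step: Frank-Wolfe with L fixed (the convex problem (6)).
            for k in range(n_fw):
                resid = Xs @ L - ns * gamma @ Xt
                grad = (-2.0 / d) * resid @ Xt.T + (lam_g / (ns * nt)) * C
                s = ot.emd(a, b, grad)  # linear-minimization oracle over the transport polytope
                gamma = gamma + 2.0 / (k + 2.0) * (s - gamma)
            # L step: closed-form ridge update for
            # min_L ||Xs L - B||^2 / (ns d) + (lam_T / d^2) ||L - I||^2.
            B = ns * gamma @ Xt
            A = Xs.T @ Xs / (ns * d) + (lam_T / d ** 2) * np.eye(d)
            rhs = Xs.T @ B / (ns * d) + (lam_T / d ** 2) * np.eye(d)
            L = np.linalg.solve(A, rhs)
        return L, gamma

For end-to-end use, recent versions of the POT library also ship ot.da.MappingTransport, which implements this joint estimation for both linear and kernel mappings.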
Open Source Code: No. The paper contains no statement about releasing source code and no link to a code repository for the proposed method.
Open Datasets: Yes. 'We consider two domain adaptation (DA) datasets, namely Moons [21] and Office-Caltech [22].' Both are standard benchmarks cited with their respective papers [21, 22], confirming their public availability.
Dataset Splits: Yes. For the Moons dataset, 'we consider 300 source and target examples for training'. For the Office-Caltech dataset, 'During the training process we consider all the examples from the source domain and half of the examples from the target domain'. The paper adds: 'All the hyper-parameters are tuned according to a grid search on the source and target training instances using a circular validation procedure derived from [21, 25] and described in the supplementary material.'
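For concreteness, a minimal sketch of the reported Office-Caltech protocol, assuming a random half of the target domain is held out for testing (function and variable names are ours, not the authors'):

    import numpy as np

    def office_caltech_split(Xs, ys, Xt, yt, seed=0):
        # All source examples are used for training; the target domain is
        # split in half: one half for training, the other half for testing.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(Xt))
        half = len(Xt) // 2
        train_t, test_t = idx[:half], idx[half:]
        return (Xs, ys), (Xt[train_t], yt[train_t]), (Xt[test_t], yt[test_t])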
Hardware Specification: No. The paper only notes that 'each example is computed in less than 30s on a standard personal laptop', which lacks specifics such as CPU, GPU, or memory.
Software Dependencies: No. The paper does not name any software libraries, dependencies, or version numbers used for the implementation or experiments.
Experiment Setup: Yes. 'For GFK and SA we choose the dimension of the subspace d ∈ {3, 6, …, 30}, for L1L2 and OTE we set the parameter for entropy regularization in {10⁻⁶, 10⁻⁵, …, 10⁵}, for L1L2 we choose the class-related parameter η ∈ {10⁻⁵, 10⁻⁴, …, 10²}, for all our methods we choose λT, λγ ∈ {10⁻³, 10⁻², …, 10⁰}.' For the image-editing task, specific values are provided: 'λT = 10⁻² and λT = 10³ for respectively the linear and kernel versions, and λγ = 10⁻⁷ for both cases.'
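Transcribed into code, the reported search spaces look as follows (a hedged sketch; the key names are ours, not the authors'):

    # Grid-search ranges as reported above.
    grids = {
        "subspace_dim_d": list(range(3, 31, 3)),           # GFK, SA: d in {3, 6, ..., 30}
        "entropy_reg": [10.0 ** k for k in range(-6, 6)],  # L1L2, OTE: {10^-6, ..., 10^5}
        "eta": [10.0 ** k for k in range(-5, 3)],          # L1L2: {10^-5, ..., 10^2}
        "lam_T": [10.0 ** k for k in range(-3, 1)],        # proposed methods: {10^-3, ..., 10^0}
        "lam_gamma": [10.0 ** k for k in range(-3, 1)],    # proposed methods: {10^-3, ..., 10^0}
    }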