Metric Learning in Optimal Transport for Domain Adaptation

Authors: Tanguy Kerdoncuff, Rémi Emonet, Marc Sebban

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate the effectiveness of this original approach.
Researcher Affiliation | Academia | Univ Lyon, UJM-Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, F-42023, SAINT-ETIENNE, France
Pseudocode | Yes | Algorithm 1: MLOT (a sketch of its alternating structure is given after this table).
Open Source Code | Yes | The code of the 10 methods is available, together with the datasets, the code for the cross-validation that recreates Table 1, and the code that automatically produces Figures 1 and 4. Footnote 2: https://github.com/Hv0nnus/MLOT
Open Datasets | Yes | We use the Office-Caltech dataset [Gong et al., 2012] [...] We also use the Office31 dataset [Saenko et al., 2010].
Dataset Splits | Yes | In unsupervised DA, there are no target labels, so the classical cross-validation procedure cannot be used to choose the best hyper-parameters. To compare methods fairly, we take inspiration from the work of [Zhong et al., 2010] and apply the following strategy for all methods: we first assign pseudo-labels to the target points (using the considered method) and then use these target labels to re-assign labels to the source data, using a basis DA algorithm. (A sketch of this circular validation is given after the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., CPU or GPU models, memory, or cloud instance types).
Software Dependencies | No | The paper mentions implementing a differentiable version of MLOT 'using the PyTorch framework' but does not specify the version of PyTorch or of any other software dependency.
Experiment Setup | Yes | MLOT is parameterized by 5 hyper-parameters: the three regularization parameters (λe, λc, λl), which control the trade-off between the terms in Eq. (10), the number of dimensions kept by the PCA (d), and the number of iterations (N). Note that in these experiments, LMNN [Weinberger and Saul, 2009] was used (arbitrarily) to learn Ls in the term Ωl(Ls); an additional hyper-parameter therefore has to be tuned, namely the margin used in this metric learning algorithm. [...] we set the hyper-parameters of MLOT as follows: λl = 1, N = 10, LMNN margin = 10, d = 70; and we tune λe and λc. (An illustrative tuning grid is given after the table.)
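
Algorithm 1 (MLOT) alternates between solving a regularized optimal transport problem under the current Mahalanobis metric and updating that metric. Below is a minimal Python sketch of this alternating loop, assuming the POT library (`pip install pot`); the plain Sinkhorn solver stands in for the paper's class-regularized objective (the λc term is omitted), and the gradient step with rescaling is only a placeholder for the paper's LMNN-based update of Ls, so this illustrates the structure rather than the authors' exact method.

```python
import numpy as np
import ot  # POT: Python Optimal Transport


def mlot_sketch(Xs, Xt, lambda_e=1.0, n_iter=10, lr=0.01):
    """Alternate between Sinkhorn OT and a (placeholder) metric update."""
    ns, nt = len(Xs), len(Xt)
    a, b = np.full(ns, 1.0 / ns), np.full(nt, 1.0 / nt)  # uniform weights
    L = np.eye(Xs.shape[1])  # linear map; Mahalanobis metric M = L.T @ L
    for _ in range(n_iter):
        # 1) Ground cost under the current metric: ||L xs_i - L xt_j||^2.
        C = ot.dist(Xs @ L.T, Xt @ L.T)
        C = C / C.max()
        # 2) Entropy-regularized transport plan (class regularization omitted).
        G = ot.sinkhorn(a, b, C, reg=lambda_e)
        # 3) Placeholder metric step: gradient of the transport cost w.r.t. L,
        #    i.e. the plan-weighted scatter of source-target differences.
        D = Xs[:, None, :] - Xt[None, :, :]       # (ns, nt, dim) differences
        S = np.einsum('ij,ijk,ijl->kl', G, D, D)  # weighted scatter matrix
        L = L - lr * (L @ S)
        # Crude rescaling; in the paper the Omega_l(Ls) term is what keeps
        # the metric away from the degenerate solution L = 0.
        L = L / np.linalg.norm(L)
    return G, L
```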
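
The dataset-splits row describes what is essentially a circular (reverse) validation in the spirit of Zhong et al., 2010. A minimal sketch of that idea follows; `da_method` and `basis_da` are hypothetical callables standing in for the method under evaluation and the basis DA algorithm, each with the assumed signature `predict(X_train, y_train, X_test) -> labels`.

```python
from sklearn.metrics import accuracy_score


def reverse_validation_score(Xs, ys, Xt, da_method, basis_da):
    """Score hyper-parameters without target labels (circular validation)."""
    # 1) Pseudo-label the target domain with the method being tuned.
    yt_pseudo = da_method(Xs, ys, Xt)
    # 2) Transfer back: re-label the source domain from the pseudo-labeled
    #    target using the basis DA algorithm.
    ys_back = basis_da(Xt, yt_pseudo, Xs)
    # 3) Agreement with the true source labels serves as a proxy for target
    #    accuracy, so candidate hyper-parameters can be ranked.
    return accuracy_score(ys, ys_back)
```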
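
Finally, the quoted experiment setup fixes four hyper-parameters and tunes the remaining two. The hypothetical grid below makes that concrete; the fixed values are taken from the quote, while the candidate ranges for λe and λc are illustrative assumptions (the paper's actual search ranges are not quoted here).

```python
from itertools import product

# Fixed values quoted above; the key names are illustrative.
fixed = {"lambda_l": 1, "n_iter": 10, "lmnn_margin": 10, "pca_dim": 70}

lambda_e_grid = [0.1, 1.0, 10.0]  # illustrative candidates, not from the paper
lambda_c_grid = [0.1, 1.0, 10.0]  # illustrative candidates, not from the paper

configs = [{**fixed, "lambda_e": le, "lambda_c": lc}
           for le, lc in product(lambda_e_grid, lambda_c_grid)]
# Each config would be scored with reverse_validation_score above and the
# best (lambda_e, lambda_c) pair retained.
```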