Two-Stage Metric Learning

Authors: Jun Wang, Ke Sun, Fei Sha, Stéphane Marchand-Maillet, Alexandros Kalousis

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate its performance on a number of datasets. It outperforms significantly other metric learning methods and SVM." "The experimental results show that it outperforms in a statistical significant manner both metric learning methods and SVM."
Researcher Affiliation | Academia | Jun Wang (JUN.WANG@UNIGE.CH), Department of Computer Science, University of Geneva, Switzerland; Ke Sun (KE.SUN@UNIGE.CH), Department of Computer Science, University of Geneva, Switzerland; Fei Sha (FEISHA@USC.EDU), Department of Computer Science, University of Southern California, Los Angeles, CA, USA; Stéphane Marchand-Maillet (STEPHANE.MARCHAND-MAILLET@UNIGE.CH), Department of Computer Science, University of Geneva, Switzerland; Alexandros Kalousis (ALEXANDROS.KALOUSIS@HESGE.CH), Department of Business Informatics, University of Applied Sciences, Western Switzerland, and Department of Computer Science, University of Geneva, Switzerland
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper refers to code for baseline methods (LMNN, KML) but does not provide access to the source code of the proposed SBFIML method.
Open Datasets | Yes | "We evaluate the performance of SBFIML on ten datasets from the UCI Machine Learning and mldata (http://mldata.org/) repositories. The details of these datasets are reported in the first column of Table 1. All datasets are preprocessed by standardizing the input features."
Dataset Splits | Yes | "To estimate the classification accuracy we used 5 times 10-fold CV." "We select the margin parameter γ from {0.0001, 0.001, 0.01, 0.1} using a 4-fold inner Cross Validation (CV)." "The selection of an appropriate similarity function is crucial for SBFIML. We choose the similarity function with a 4-fold inner CV from the angular similarity, equation (8), and the Gaussian similarity in equation (7)."
Hardware Specification | No | The paper does not provide details about the hardware used for the experiments (e.g., CPU or GPU models, or cloud instance specifications).
Software Dependencies | No | The paper mentions methods and implementations such as LMNN, KML, GLML, PLML, SBMML, χ2 LMNN, and SVM, but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | "The LMNN has one hyper-parameter µ (Weinberger & Saul, 2009). We set it to its default value µ = 1. The SBFIML has two hyper-parameters α and γ. Following LMNN (Weinberger & Saul, 2009), we set the α parameter to 1. We select the margin parameter γ from {0.0001, 0.001, 0.01, 0.1} using a 4-fold inner Cross Validation (CV)." "For all methods, KML, SBMML, χ2 LMNN and SBFIML, we set k = 0.1n in all experiments." "The triplet constraints are constructed using three same-class and ten different-class nearest neighbors for each learning instance."
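The evaluation protocol in the Dataset Splits row (5 repetitions of 10-fold outer CV, with a 4-fold inner CV to select a hyper-parameter, on standardized features) can be sketched as below. Since no SBFIML implementation is available, this is only an illustration of the protocol: a k-NN classifier and the iris dataset stand in for the paper's method and datasets, and the inner grid searches over a stand-in k-NN parameter rather than the margin γ.

```python
# Sketch of the paper's evaluation protocol: 5 x 10-fold outer CV with a
# 4-fold inner CV for hyper-parameter selection. KNeighborsClassifier and
# iris are stand-ins (assumptions), since SBFIML itself is not released.
from sklearn.datasets import load_iris
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     cross_val_score)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)  # stand-in for a UCI/mldata dataset

# Inner 4-fold CV selects the hyper-parameter (gamma in the paper;
# n_neighbors here as an illustrative substitute). Features are
# standardized inside the pipeline, mirroring the paper's preprocessing.
inner = GridSearchCV(
    make_pipeline(StandardScaler(), KNeighborsClassifier()),
    param_grid={"kneighborsclassifier__n_neighbors": [1, 3, 5, 7]},
    cv=4,
)

# Outer evaluation: 10-fold CV repeated 5 times -> 50 accuracy estimates.
outer = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(inner, X, y, cv=outer)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Nesting the grid search inside the outer loop keeps the hyper-parameter selection from leaking test data into the reported accuracy, which is what the paper's 4-fold inner CV accomplishes.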
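The triplet construction quoted in the Experiment Setup row (three same-class and ten different-class nearest neighbors per learning instance) can be sketched as follows. The function name and the use of plain Euclidean distance are assumptions for illustration, not details from the paper.

```python
# Sketch of triplet-constraint construction: for each instance, pair its
# 3 nearest same-class neighbors with its 10 nearest different-class
# neighbors, yielding (anchor, target, impostor) index triplets.
# build_triplets and the Euclidean metric are assumptions.
import numpy as np

def build_triplets(X, y, n_same=3, n_diff=10):
    # Pairwise Euclidean distances between all instances.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    triplets = []
    for i in range(len(X)):
        same = np.where((y == y[i]) & (np.arange(len(X)) != i))[0]
        diff = np.where(y != y[i])[0]
        targets = same[np.argsort(dist[i, same])[:n_same]]
        impostors = diff[np.argsort(dist[i, diff])[:n_diff]]
        triplets.extend((i, j, l) for j in targets for l in impostors)
    return np.array(triplets)

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))   # toy data, two balanced classes
y = np.repeat([0, 1], 20)
T = build_triplets(X, y)
print(T.shape)  # 40 instances x 3 targets x 10 impostors = (1200, 3)
```

Triplets of this form feed large-margin objectives in the LMNN family, which require each anchor to be closer to its targets than to its impostors by the margin.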