Geometric Mean Metric Learning
Authors: Pourya Zadeh, Reshad Hosseini, Suvrit Sra
ICML 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, on standard benchmark datasets, our closed-form solution consistently attains higher classification accuracy. ... We consider multi-class classification using the learned metrics, and validate GMML by comparing it against widely used metric learning methods. GMML runs up to three orders of magnitude faster while consistently delivering equal or higher classification accuracy. |
| Researcher Affiliation | Academia | Pourya Habib Zadeh (p.habibzadeh@ut.ac.ir) and Reshad Hosseini (reshad.hosseini@ut.ac.ir), School of ECE, College of Engineering, University of Tehran, Tehran, Iran; Suvrit Sra (suvrit@mit.edu), Massachusetts Institute of Technology, Cambridge, MA, USA |
| Pseudocode | Yes | Algorithm 1 Geometric Mean Metric Learning |
| Open Source Code | No | The paper does not state that source code for the described method is released, nor does it link to a source-code repository. |
| Open Datasets | Yes | The datasets are obtained from the well-known UCI repository (Asuncion & Newman, 2007). The datasets in this experiment are Isolet, Letters (Asuncion & Newman, 2007), MNIST (LeCun et al., 1998) and USPS (LeCun et al., 1990). |
| Dataset Splits | Yes | Figure 2 reports 40 runs of a two-fold splitting of the data. In each run, the data is randomly divided into two equal sets. We use five-fold cross-validation for choosing the best parameter t. Figure 4 reports the average classification error over 5 runs of random splitting of the data. We use three-fold cross-validation for adjusting the parameter t. |
| Hardware Specification | Yes | All methods were implemented on MATLAB R2014a (64-bit), and the simulations were run on a personal laptop with an Intel Core i5 (2.5 GHz) processor under the OS X Yosemite operating system. |
| Software Dependencies | Yes | All methods were implemented on MATLAB R2014a (64-bit). |
| Experiment Setup | Yes | We choose k = 5, and estimate a full-rank matrix A in all methods. The regularization parameter λ is set to zero for most of the datasets. We only add a small value of λ when the similarity matrix S becomes singular. For example, since the similarity matrix of the Segment data is near singular, we use the regularized version of our method with λ = 0.1 and A0 equal to the identity matrix. We use five-fold cross-validation for choosing the best parameter t. In the first step the best t is chosen among the values {0.1, 0.3, 0.5, 0.7, 0.9}. |
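The Algorithm 1 referenced above is GMML's closed-form solution: the learned Mahalanobis matrix is the weighted geometric mean A = S⁻¹ #ₜ D of the inverse similarity matrix S⁻¹ and the dissimilarity matrix D, where S and D are sums of outer products of differences over similar and dissimilar pairs. A minimal NumPy sketch of that computation is below; the function names are our own, and the regularized variant with λ and A0 from the quoted setup is omitted.

```python
import numpy as np

def spd_power(M, p):
    """Matrix power of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.power(w, p)) @ V.T

def weighted_geometric_mean(A, B, t):
    """Geodesic A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2} on the SPD manifold."""
    A_half = spd_power(A, 0.5)
    A_neg_half = spd_power(A, -0.5)
    mid = A_neg_half @ B @ A_neg_half
    mid = (mid + mid.T) / 2  # symmetrize against floating-point round-off
    return A_half @ spd_power(mid, t) @ A_half

def gmml(S, D, t=0.5):
    """GMML closed form: A = S^{-1} #_t D, i.e. the point at parameter t on the
    geodesic from S^{-1} (t = 0) to D (t = 1)."""
    return weighted_geometric_mean(spd_power(S, -1.0), D, t)
```

At t = 0 this returns S⁻¹ and at t = 1 it returns D, which matches the role of t in the quoted setup: the paper cross-validates it over {0.1, 0.3, 0.5, 0.7, 0.9}.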