Low-Rank Similarity Metric Learning in High Dimensions
Authors: Wei Liu, Cun Mu, Rongrong Ji, Shiqian Ma, John Smith, Shih-Fu Chang
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The efficacy of the proposed algorithm is demonstrated through experiments performed on four benchmark datasets with tens of thousands of dimensions. |
| Researcher Affiliation | Collaboration | IBM T. J. Watson Research Center; Columbia University; Xiamen University; The Chinese University of Hong Kong. Emails: {weiliu,jsmith}@us.ibm.com, cm3052@columbia.edu, sfchang@ee.columbia.edu, rrji@xmu.edu.cn, sqma@se.cuhk.edu.hk |
| Pseudocode | Yes | Algorithm 1 Low-Rank Similarity Metric Learning |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | We carry out the experiments on four benchmark datasets including two document datasets Reuters-28 and TDT2-30 (Cai, He, and Han 2011), and two image datasets UIUC-Sports (Li and Fei-Fei 2007) and UIUC-Scene (Lazebnik, Schmid, and Ponce 2006). |
| Dataset Splits | Yes | On Reuters-28 and TDT2-30, we select 5×C up to 30×C samples for training (where C is the number of categories) such that each category covers at least one sample; we pick the same number of samples for cross-validation; the rest of the samples are for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | To run our proposed method LRSML, we fix ϵ = 0.1, ρ = 1, and find that τ = 0.01 makes the linearized ADMM converge within T = 1,000 iterations on all datasets. |
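Since the paper reports the linearized-ADMM hyperparameters (ρ = 1, τ, a fixed iteration budget T) but releases no code, the sketch below illustrates what a linearized ADMM loop of that general shape looks like. It solves a toy lasso problem, not the paper's low-rank metric objective; the problem, function names, and the specific values lam = 0.1 and tau = 0.15 (chosen so that τρ‖A‖² < 1 holds for this toy A) are all illustrative assumptions, not the authors' settings.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the prox operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_admm_lasso(A, b, lam=0.1, rho=1.0, tau=0.15, T=1000):
    """Linearized ADMM sketch for  min_x  lam*||x||_1 + 0.5*||Ax - b||^2,
    split as f(x) + g(z) subject to Ax = z (scaled dual form).
    Convergence requires roughly tau * rho * ||A||_2^2 <= 1.
    This is a generic textbook instance, not the paper's LRSML update."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(m)
    u = np.zeros(m)  # scaled dual variable
    for _ in range(T):
        # x-update: linearize the quadratic penalty, then apply the l1 prox
        grad = rho * A.T @ (A @ x - z + u)
        x = soft_threshold(x - tau * grad, tau * lam)
        # z-update: exact prox of g(z) = 0.5*||z - b||^2 with penalty rho
        z = (b + rho * (A @ x + u)) / (1.0 + rho)
        # dual ascent on the constraint Ax = z
        u = u + A @ x - z
    return x, z

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x, z = linearized_admm_lasso(A, b)
print("primal residual:", np.linalg.norm(A @ x - z))
```

In the paper's setting the x-update prox would instead involve the low-rank metric variable, but the overall structure (linearized primal step, exact secondary step, dual ascent, fixed iteration cap T) is the same.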