Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Efficient Learning of Mahalanobis Metrics for Ranking

Authors: Daryl Lim, Gert Lanckriet

ICML 2014

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "To evaluate our proposed method, we conducted two sets of experiments." |
| Researcher Affiliation | Academia | "Daryl K. H. Lim EMAIL Department of Electrical and Computer Engineering, University of California, San Diego, CA 92093 USA" |
| Pseudocode | Yes | "Algorithm 1 Symmetric gradient update" |
| Open Source Code | No | The paper neither links to its own source code nor states that it is open source; it references third-party code for LMNN. |
| Open Datasets | Yes | ImageNet (Deng et al., 2009), CAL10K (Tingle et al., 2010), and MagnaTagATune (Law et al., 2009) ... the Covertype dataset from the UCI repository |
| Dataset Splits | Yes | "For each method, the hyperparameters with the best MAP performance on the validation set were selected." |
| Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., CPU or GPU models, memory, or cloud resources). |
| Software Dependencies | No | The paper names the algorithms and implementations it uses (e.g., MLR-ADMM, LMNN 2.4 code) but gives no version numbers for the software environment required for replication. |
| Experiment Setup | Yes | "For both variants of FRML, the trade-off parameter λ was fixed at 0.1 and we used a minibatch of size 5." |
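For context on the paper's subject: a Mahalanobis metric scores pairs via d_W(x, y) = (x − y)ᵀ W (x − y) with W positive semidefinite, typically factored as W = LᵀL. The sketch below is purely illustrative, not the authors' FRML implementation; `L`, the data, and the query are all made-up stand-ins.

```python
import numpy as np

# Illustrative sketch: rank database points by a Mahalanobis distance
# d_W(q, x) = (q - x)^T W (q - x), where W = L^T L is PSD by construction.
# L here is a random stand-in for a learned metric factor, not the paper's.
rng = np.random.default_rng(0)
L = rng.standard_normal((3, 5))   # hypothetical learned factor, R^5 -> R^3
X = rng.standard_normal((10, 5))  # database of 10 points in R^5
q = rng.standard_normal(5)        # query point

# In factored form, distances under W = L^T L are squared Euclidean
# distances after projecting differences through L.
diffs = (X - q) @ L.T                        # shape (10, 3)
dists = np.einsum("ij,ij->i", diffs, diffs)  # row-wise squared norms
ranking = np.argsort(dists)                  # database indices, nearest first
```

Ranking quality of such orderings is what metrics like MAP (mentioned in the Dataset Splits row) evaluate.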