Surrogate Functions for Maximizing Precision at the Top

Authors: Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

ICML 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conclude with experimental results comparing our algorithms with state-of-the-art cutting plane and stochastic gradient algorithms for maximizing prec@k. Our experiments reveal three interesting insights into the problem of prec@k maximization..."
Researcher Affiliation | Collaboration | Purushottam Kar (T-PURKAR@MICROSOFT.COM), Microsoft Research, India; Harikrishna Narasimhan (HARIKRISHNA@CSA.IISC.ERNET.IN), Indian Institute of Science, Bangalore, India; Prateek Jain (PRAJAIN@MICROSOFT.COM), Microsoft Research, India
Pseudocode | Yes | Algorithm 1: PERCEPTRON@K-AVG; Algorithm 2: PERCEPTRON@K-MAX; Algorithm 3: SGD@K-AVG; Algorithm 4: subgradient calculation for ℓavg prec@k
Open Source Code | No | The paper contains no statement about releasing source code and no link to a code repository for the described methodology.
Open Datasets | Yes | "We evaluated our methods on 7 publicly available benchmark datasets: a) PPI, b) KDD Cup 2008, c) Letter, d) Adult, e) IJCNN, f) Covertype, and g) Cod-RNA."
Dataset Splits | No | The paper states "We used 70% of the data for training and the rest for testing." but does not mention a separate validation split or its proportion.
Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., CPU/GPU models, processor types, or memory amounts).
Software Dependencies | No | The paper states only "All methods were implemented in C." and lists no software packages, libraries, or solvers with version numbers needed to replicate the experiments.
Experiment Setup | Yes | "The perceptron and SGD methods were given a maximum of 25 passes over the data with a batch length of 500."
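Since the paper's code is not released, anyone re-running these experiments must reimplement the evaluation metric themselves. The sketch below is a minimal, hedged illustration of prec@k (the fraction of true positives among the k highest-scoring examples), the quantity the paper's surrogates are designed to maximize; it is not the authors' implementation, and the function name and tie-breaking behavior are our own choices.

```python
def prec_at_k(scores, labels, k):
    """Precision at the top: fraction of positives among the k
    highest-scoring examples (ties broken by sort order).

    Illustrative only -- not the authors' C implementation.
    """
    if not 0 < k <= len(scores):
        raise ValueError("k must satisfy 1 <= k <= len(scores)")
    # Indices of the k largest scores, in descending score order.
    top_k = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    # Assumes labels are 0/1 indicators of the positive class.
    return sum(labels[i] for i in top_k) / k
```

For example, with scores [0.9, 0.1, 0.8, 0.4] and labels [1, 0, 0, 1], the top-2 examples are indices 0 and 2, of which one is positive, so prec@2 = 0.5.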