(Almost) No Label No Cry

Authors: Giorgio Patrini, Richard Nock, Paul Rivera, Tiberio Caetano

NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments are provided on fourteen domains, whose size ranges up to ≈300K observations. They display that our algorithms are scalable and tend to consistently outperform the state of the art in LLP. Moreover, in many cases, our algorithms compete with or are just percents of AUC away from the Oracle that learns knowing all labels. ... Section (3) presents experiments. ... We compare LMM, AMM (F_φ = logistic loss) to the original MM [17], InvCal [11], conv-∝SVM and alter-∝SVM [16] (linear kernels). ... The testing metric is the AUC. ... Small and large domains experiments: We convert 10 small domains [19] (m ≤ 1000) and 4 bigger ones (m > 8000) from UCI [26] into the LLP framework. ... Table 2: Small domains results. ... Table 3: AUCs on big domains...
Researcher Affiliation | Collaboration | Australian National University (1), NICTA (2), University of New South Wales (3), Ambiata (4); Sydney, NSW, Australia; {name.surname}@anu.edu.au
Pseudocode | Yes | Algorithm 1: Laplacian Mean Map (LMM) ... Algorithm 2: Alternating Mean Map (AMM^OPT). (Hedged R sketches of both algorithms appear after this table.)
Open Source Code | No | AMM/LMM/MM are implemented in R. Code for InvCal and ∝SVM is from [16].
Open Datasets | Yes | Small and large domains experiments: We convert 10 small domains [19] (m ≤ 1000) and 4 bigger ones (m > 8000) from UCI [26] into the LLP framework.
Dataset Splits | Yes | We perform 5-fold nested CV comparisons on the 10 domains = 50 AUC values for each algorithm. (A sketch of the AUC and fold loop appears after this table.)
Hardware Specification | Yes | Tests are done on a 4-core 3.2 GHz CPU Mac with 32 GB of RAM.
Software Dependencies No The paper states 'AMM/LMM/MM are implemented in R' but does not provide specific version numbers for R or any other software dependencies, libraries, or solvers.
Experiment Setup | Yes | Input: S_j, π̂_j, j ∈ [n]; γ > 0 (7); w (7); V (8); permissible φ (2); λ > 0; ... LMM/AMM_G, LMM/AMM_{G,s}, LMM/AMM_nc respectively denote v_{G,s} with s = 1, v_{G,s} with s learned by cross-validation (CV; validation ranges indicated in [19]), and v_nc.
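
For concreteness, the mean-operator estimation at the heart of Algorithm 1 can be sketched in R, the language the paper names for its implementation. This is a minimal sketch under our own simplifying assumptions (uniform bag weights w, a Gaussian similarity between bag means standing in for v_{G,s} with s = 1, and the logistic loss for F_φ); all names are illustrative, and none of this is the authors' released code.

    # Sketch of the Laplacian Mean Map (Algorithm 1); illustrative, not the
    # authors' implementation. bags: list of n numeric matrices (one per S_j);
    # pi_hat: vector of n bag-level label proportions; gamma: Laplacian weight.
    lmm_mean_operator <- function(bags, pi_hat, gamma = 1) {
      n <- length(bags)
      B <- t(sapply(bags, colMeans))               # n x d matrix of bag means
      m_j <- sapply(bags, nrow); m <- sum(m_j)
      Pi <- cbind(diag(pi_hat), diag(1 - pi_hat))  # n x 2n mixing matrix
      # Bag Laplacian from a Gaussian similarity between bag means (our
      # stand-in for v_{G,s} with s = 1)
      W <- exp(-as.matrix(dist(B))^2)
      La <- diag(rowSums(W)) - W
      L <- rbind(cbind(La, matrix(0, n, n)),
                 cbind(matrix(0, n, n), La))
      # Penalized least squares for the 2n label-conditional bag means b_j^+/-
      Bpm <- solve(t(Pi) %*% Pi + gamma * L, t(Pi) %*% B)
      bp <- Bpm[1:n, , drop = FALSE]
      bm <- Bpm[(n + 1):(2 * n), , drop = FALSE]
      # Plug-in estimate of the mean operator mu_S = (1/m) * sum_i y_i * x_i
      colSums((m_j / m) * (pi_hat * bp - (1 - pi_hat) * bm))
    }

    # Fit theta on the label-free part of the logistic loss plus the estimated
    # mean operator, via log(1+e^{-yz}) = (log(1+e^z)+log(1+e^{-z}))/2 - yz/2.
    fit_theta <- function(X, mu_hat, lambda = 1e-3) {
      obj <- function(theta) {
        z <- drop(X %*% theta)
        mean((log1p(exp(z)) + log1p(exp(-z))) / 2) -
          0.5 * sum(theta * mu_hat) + lambda * sum(theta^2)
      }
      optim(numeric(ncol(X)), obj, method = "BFGS")$par
    }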
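Algorithm 2 can be sketched on top of the same pieces. The sketch below covers only the AMM_min variant, assumes each bag's proportion rounds to an integer count of positives, and reuses fit_theta from the LMM sketch; again, it is our reading of the algorithm, not the released code.

    # Sketch of the Alternating Mean Map (Algorithm 2, AMM_min variant).
    # Alternates (i) the bag-consistent labeling that minimizes the loss for
    # the current theta and (ii) refitting theta from the implied mean operator.
    amm_min <- function(bags, pi_hat, theta0, iters = 10, lambda = 1e-3) {
      X <- do.call(rbind, bags); m <- nrow(X); theta <- theta0
      for (it in seq_len(iters)) {
        mu <- numeric(ncol(X))
        for (j in seq_along(bags)) {
          Sj <- bags[[j]]; mj <- nrow(Sj)
          k <- round(pi_hat[j] * mj)       # number of positives in bag j
          z <- drop(Sj %*% theta)
          y <- rep(-1, mj)
          y[order(z, decreasing = TRUE)[seq_len(k)]] <- 1  # top scores -> +1
          mu <- mu + colSums(y * Sj) / m   # accumulate (1/m) sum_i y_i x_i
        }
        theta <- fit_theta(X, mu, lambda)  # reuse fit_theta from the LMM sketch
      }
      theta
    }

For the logistic loss the inner assignment is exact: with k positives to place in a bag, the per-example gain from flipping a label to +1 is monotone in the score, so taking the k largest scores minimizes the bag's loss.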
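Finally, the "50 AUC values" protocol (5 folds x 10 domains) can be mimicked with a rank-based AUC and an outer 5-fold loop. The paper's nested CV also tunes hyperparameters on inner folds, which we omit; train_fun is a hypothetical stand-in for any LLP learner (e.g., one built on lmm_mean_operator), with test labels used only for scoring.

    # Mann-Whitney (rank-based) AUC for labels in {-1, +1}.
    auc <- function(scores, labels) {
      r <- rank(scores)
      n1 <- sum(labels == 1); n0 <- sum(labels == -1)
      (sum(r[labels == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
    }

    # Outer 5-fold loop only; the inner hyperparameter-selection folds of the
    # paper's nested CV are omitted. train_fun should use y only to form bag
    # proportions, matching the LLP conversion of the UCI domains.
    five_fold_auc <- function(X, y, train_fun) {
      folds <- sample(rep(1:5, length.out = nrow(X)))
      sapply(1:5, function(f) {
        tr <- folds != f
        theta <- train_fun(X[tr, , drop = FALSE], y[tr])
        auc(drop(X[!tr, , drop = FALSE] %*% theta), y[!tr])
      })
    }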