PAC-Bayesian AUC classification and scoring

Authors: James Ridgway, Pierre Alquier, Nicolas Chopin, Feng Liang

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We now compare our PAC-Bayesian approach (computed with EP) with Bayesian logistic regression (to deal with non-identifiable cases), and with the rankboost algorithm [Freund et al., 2003] on different datasets; note that Cortes and Mohri [2003] showed that the function optimised by rankboost is the AUC." and "Table 1: Comparison of AUC."
Researcher Affiliation | Academia | James Ridgway, CREST and CEREMADE, University Dauphine (james.ridgway@ensae.fr); Pierre Alquier, CREST (ENSAE) (pierre.alquier@ucd.ie); Nicolas Chopin, CREST (ENSAE) and HEC Paris (nicolas.chopin@ensae.fr); Feng Liang, University of Illinois at Urbana-Champaign (liangf@illinois.edu)
Pseudocode | Yes | "Algorithm 1: Tempering SMC"
Open Source Code | No | No explicit statement about providing open-source code for the methodology described in the paper.
Open Datasets | Yes | "All available at http://archive.ics.uci.edu/ml/"
Dataset Splits | Yes | "As mentioned in Section 3, we set the prior hyperparameters by maximizing the evidence, and we use cross-validation to choose γ."
Hardware Specification | No | No mention of the specific hardware used for the experiments.
Software Dependencies | No | No software dependencies with version numbers are provided.
Experiment Setup | Yes | "As mentioned in Section 3, we set the prior hyperparameters by maximizing the evidence, and we use cross-validation to choose γ. To ensure convergence of EP, when dealing with difficult sites, we use damping [Seeger, 2005]."
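The only pseudocode the paper provides is Algorithm 1, a tempering SMC sampler. For readers unfamiliar with the technique, the sketch below illustrates the general idea: particles are moved from the prior to the posterior by adaptively raising a temperature λ from 0 to 1, reweighting, resampling, and applying an MCMC move at each stage. This is not the authors' code; the ESS-based bisection rule, the random-walk Metropolis move, the step sizes, and the toy Gaussian model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tempering_smc(log_like, sample_prior, log_prior, n=500, ess_min=0.5):
    """Illustrative tempering SMC: move particles from the prior towards
    pi_lambda(theta) ∝ prior(theta) * exp(lambda * log_like(theta)),
    raising lambda adaptively from 0 to 1 using an ESS criterion."""
    theta = sample_prior(n)                      # particles drawn from the prior
    lam = 0.0
    while lam < 1.0:
        ll = np.array([log_like(t) for t in theta])
        # Bisect for the largest increment delta keeping ESS >= ess_min * n.
        lo, hi = 0.0, 1.0 - lam
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            w = np.exp(mid * (ll - ll.max()))
            if w.sum() ** 2 / (w ** 2).sum() < ess_min * n:
                hi = mid
            else:
                lo = mid
        delta = min(max(lo, 1e-3), 1.0 - lam)    # floor avoids stalling
        w = np.exp(delta * (ll - ll.max()))
        w /= w.sum()
        lam = min(lam + delta, 1.0)
        if 1.0 - lam < 1e-9:                     # snap to 1 against fp drift
            lam = 1.0
        theta = theta[rng.choice(n, size=n, p=w)]  # multinomial resampling
        # One random-walk Metropolis move per particle, targeting pi_lambda.
        prop = theta + 0.5 * rng.standard_normal(n)
        for i in range(n):
            log_ratio = (log_prior(prop[i]) + lam * log_like(prop[i])
                         - log_prior(theta[i]) - lam * log_like(theta[i]))
            if np.log(rng.uniform()) < log_ratio:
                theta[i] = prop[i]
    return theta

# Toy check: prior N(0, 1), likelihood N(theta; 2, 1) -> posterior N(1, 0.5).
post = tempering_smc(lambda x: -0.5 * (x - 2.0) ** 2,
                     lambda n: rng.standard_normal(n),
                     lambda x: -0.5 * x ** 2)
```

In the paper the tempered target involves the hinge-type relaxation of the AUC risk rather than a log-likelihood; the Gaussian toy model here only serves to show the mechanics of the sampler.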