On Regularizing Rademacher Observation Losses

Authors: Richard Nock

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments with a readily available code display that regularization significantly improves rado-based learning and compares favourably with example-based learning." and "... 5 and 6 respectively present experiments, and conclude."
Researcher Affiliation | Collaboration | Richard Nock, Data61, The Australian National University & The University of Sydney, richard.nock@data61.csiro.au
Pseudocode | Yes | Algorithm 1 Ω-R.ADABOOST and Algorithm 2 Ω-WL
Open Source Code | No | Footnote 4 states 'Code available at: http://users.cecs.anu.edu.au/~rnock/', which points to a personal homepage rather than a specific code repository for the methodology.
Open Datasets | Yes | "The complete results aggregate experiments on twenty (20) domains, all but one coming from the UCI [Bache and Lichman, 2013] (plus the Kaggle competition domain 'Give me some credit')"
Dataset Splits | Yes | "The experimental setup is a ten-folds stratified cross validation for all algorithms and each domain."
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9') used for the implementation or experiments.
Experiment Setup | Yes | "All algorithms are run for a total of T = 1000 iterations, and at the end of the iterations, the classifier in the sequence that minimizes the empirical loss is kept.", "To obtain very sparse solutions for regularized-ADABOOST, we pick its ω (β in [Xi et al., 2009]) in {10^-4, 1, 10^4}.", and "The experimental setup is a ten-folds stratified cross validation for all algorithms and each domain." (a sketch of this protocol appears below the table)
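
The quoted setup is concrete enough to sketch. Below is a minimal, hedged reconstruction of the evaluation protocol only (ten-fold stratified cross-validation, T = 1000 boosting iterations, and keeping the classifier in the boosting sequence that minimizes the empirical loss), assuming scikit-learn. AdaBoostClassifier and load_breast_cancer are stand-ins chosen for illustration: the paper's Ω-R.AdaBoost learner and its twenty domains are not packaged as a library, and the ω grid appears only as a constant because the stand-in learner has no such regularization parameter.

```python
# Hedged sketch of the quoted evaluation protocol, not the paper's Omega-R.AdaBoost.
import numpy as np
from sklearn.datasets import load_breast_cancer   # placeholder UCI-style domain
from sklearn.ensemble import AdaBoostClassifier    # stand-in boosting learner
from sklearn.metrics import zero_one_loss
from sklearn.model_selection import StratifiedKFold

X, y = load_breast_cancer(return_X_y=True)
T = 1000                       # total boosting iterations quoted in the setup
OMEGA_GRID = [1e-4, 1.0, 1e4]  # omega grid quoted for regularized AdaBoost (unused by the stand-in)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_errors = []
for train_idx, test_idx in cv.split(X, y):
    clf = AdaBoostClassifier(n_estimators=T).fit(X[train_idx], y[train_idx])

    # "the classifier in the sequence that minimizes the empirical loss is kept":
    # evaluate every prefix ensemble on the training fold and keep the best one.
    train_losses = [zero_one_loss(y[train_idx], p) for p in clf.staged_predict(X[train_idx])]
    best_prefix = int(np.argmin(train_losses))

    test_preds = list(clf.staged_predict(X[test_idx]))
    fold_errors.append(zero_one_loss(y[test_idx], test_preds[best_prefix]))

print(f"10-fold mean test error: {np.mean(fold_errors):.4f} (+/- {np.std(fold_errors):.4f})")
```

Scanning staged_predict mirrors the quoted rule of keeping the best classifier in the iteration sequence rather than the final ensemble; everything else (learner, dataset, ω handling) is an assumption of this sketch.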