On Margins and Generalisation for Voting Classifiers

Authors: Felix Biggs, Valentina Zantedeschi, Benjamin Guedj

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 4 (Empirical evaluation): In this section we empirically validate our results against existing PAC-Bayesian and margin bounds on several classification datasets from UCI (Dua and Graff, 2017), LIBSVM and Zalando (Xiao et al., 2017). Since our main result in Theorem 2 is not associated with any particular algorithm, we use θ outputted from PAC-Bayes-derived algorithms to evaluate this result against other margin bounds (Figure 1) and PAC-Bayes bounds (Figure 2). We then compare optimisation of our secondary result, Theorem 3, with optimising those PAC-Bayes bounds directly (Figure 3).
Researcher Affiliation | Collaboration | Felix Biggs, Department of Computer Science, University College London and Inria London, contact@felixbiggs.com; Valentina Zantedeschi, ServiceNow Research, University College London and Inria London, vzantedeschi@gmail.com; Benjamin Guedj, Department of Computer Science, University College London and Inria London, b.guedj@ucl.ac.uk
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code for reproducing the results is available at https://github.com/vzantedeschi/dirichlet-margin-bound.
Open Datasets | Yes | In this section we empirically validate our results against existing PAC-Bayesian and margin bounds on several classification datasets from UCI (Dua and Graff, 2017), LIBSVM and Zalando (Xiao et al., 2017).
Dataset Splits | Yes | We reserve 50% of the training data as a training set, and 50% as a validation set. (An illustrative split is sketched after the table.)
Hardware Specification | No | The paper states, 'The experiments presented in this paper were carried out using the Grid'5000 testbed,' but does not provide specific hardware details such as GPU/CPU models or memory specifications.
Software Dependencies | Yes | All experiments were implemented in Python 3.8.10 using PyTorch 1.10.1.
Experiment Setup | Yes | Our PAC-Bayes objectives are minimised using stochastic gradient descent with the Adam optimizer (Kingma and Ba, 2015), a learning rate of 0.001 and weight decay 0.0001, over 200 epochs. (A configuration sketch follows the table.)
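The 50/50 train/validation split reported above could be reproduced along the following lines. This is a minimal PyTorch sketch under our own assumptions (a generic `dataset` object and a `seed` argument), not the authors' code from the linked repository.

```python
import torch
from torch.utils.data import random_split

def split_train_valid(dataset, seed=0):
    # Reserve half of the training data for training and half for validation,
    # as stated in the paper; the seed argument is our own assumption.
    n_train = len(dataset) // 2
    n_valid = len(dataset) - n_train
    generator = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, n_valid], generator=generator)
```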
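Similarly, the reported optimisation setup (Adam, learning rate 0.001, weight decay 0.0001, 200 epochs) corresponds to a configuration like the one below. Here `model`, `loader`, and `pac_bayes_objective` are placeholders for the predictor, data pipeline, and bound described in the paper; they are not identifiers from the released code.

```python
import torch

def minimise_pac_bayes_objective(model, loader, pac_bayes_objective, epochs=200):
    # Adam with the reported hyperparameters: learning rate 0.001,
    # weight decay 0.0001, run for 200 epochs.
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    for _ in range(epochs):
        for inputs, targets in loader:
            optimiser.zero_grad()
            # Placeholder for the PAC-Bayes bound being minimised.
            loss = pac_bayes_objective(model, inputs, targets)
            loss.backward()
            optimiser.step()
    return model
```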