Controlling Multiple Errors Simultaneously with a PAC-Bayes Bound

Authors: Reuben Adams, John Shawe-Taylor, Benjamin Guedj

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 6 outlines positive empirical results from using our bound as a training objective for neural networks and Section 7 gives perspectives for follow-up work.
Researcher Affiliation | Academia | Reuben Adams, Department of Computer Science, University College London, reuben.adams.20@ucl.ac.uk; John Shawe-Taylor, Department of Computer Science, University College London, j.shawe-taylor@ucl.ac.uk; Benjamin Guedj, Department of Computer Science, University College London and Inria, b.guedj@ucl.ac.uk
Pseudocode | Yes | Algorithm 1: Calculating a posterior with minimal bound on the total risk. (A generic bound-evaluation sketch appears below the table.)
Open Source Code | Yes | Code available here: https://github.com/reubenadams/PAC-Bayes-Control
Open Datasets | Yes | We use binarised versions of MNIST, and HAM10000 Tschandl [2018]. For MNIST, we use the conventional training set of size 60000 as the prior set, and the conventional test set of size 10000 as the certification set.
Dataset Splits | Yes | For HAM10000 we pool the conventional train, validation and test sets together and then split 50-50 to obtain prior and certification sets each of size 5860. (A split sketch appears below the table.)
Hardware Specification | No | The paper does not provide specific hardware details for running the experiments. The NeurIPS checklist explicitly states: 'The compute resources required are not stated as they are negligible.'
Software Dependencies | No | The paper mentions using MLPs, SGD, and cross-entropy loss, implying the use of deep learning frameworks, but it does not specify any software names with version numbers.
Experiment Setup | Yes | We take H to be two-layer MLPs with 784, 100 and 2 units in the input, hidden and output layers, respectively. In both cases we use SGD with learning rate 0.01 to minimise the cross-entropy loss, using a portion of the prior set as a validation set. For MNIST we train the MLP for 20 epochs... For HAM10000 we train the MLP for 5 epochs... (A training sketch appears below the table.)
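
To give a flavour of the bound computation referenced in the Pseudocode row, here is a minimal, generic sketch: for a fixed posterior, it evaluates a standard PAC-Bayes-kl upper bound on the total risk by inverting the binary kl with bisection. This is not the authors' Algorithm 1 (which handles multiple error types and searches for the bound-minimising posterior); the bound form, constants, and example numbers below are assumptions for illustration only.

```python
# Generic PAC-Bayes-kl bound on total risk for a fixed posterior (illustrative sketch,
# not the paper's Algorithm 1).
import math

def binary_kl(q: float, p: float) -> float:
    """kl(q || p) for Bernoulli parameters, with the usual 0*log(0) = 0 convention."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(empirical_risk: float, rhs: float, tol: float = 1e-9) -> float:
    """Largest p in [empirical_risk, 1] with kl(empirical_risk || p) <= rhs, by bisection."""
    lo, hi = empirical_risk, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_kl(empirical_risk, mid) <= rhs:
            lo = mid
        else:
            hi = mid
    return lo

def total_risk_bound(empirical_risk: float, kl_posterior_prior: float,
                     m: int, delta: float = 0.05) -> float:
    """Standard PAC-Bayes-kl upper bound on the risk, holding with probability >= 1 - delta."""
    rhs = (kl_posterior_prior + math.log(2 * math.sqrt(m) / delta)) / m
    return kl_inverse(empirical_risk, rhs)

# Example (hypothetical numbers): empirical total risk 0.03 on m = 10000 certification examples.
print(total_risk_bound(0.03, kl_posterior_prior=5.0, m=10000))
```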
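
The following sketch mirrors the prior/certification splits quoted in the Open Datasets and Dataset Splits rows: MNIST's conventional 60000-example training set as the prior set and its 10000-example test set as the certification set, and HAM10000 pooled then split 50-50. The torchvision loader, the even/odd-style binarisation rule, and the `load_ham10000_pooled` helper are assumptions for illustration; the paper only states that binarised versions of the datasets are used.

```python
# Sketch of the prior/certification splits described in the table (assumptions noted above).
import torch
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

def binarise(label: int) -> int:
    # Hypothetical binarisation (digit < 5 vs >= 5); the paper's rule may differ.
    return int(label >= 5)

# MNIST: conventional training set (60000) -> prior set,
#        conventional test set (10000)     -> certification set.
prior_set = datasets.MNIST("data", train=True, download=True,
                           transform=to_tensor, target_transform=binarise)
cert_set = datasets.MNIST("data", train=False, download=True,
                          transform=to_tensor, target_transform=binarise)

def split_pool(features: torch.Tensor, labels: torch.Tensor, seed: int = 0):
    """Split a pooled dataset 50-50 into prior and certification sets."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(len(labels), generator=g)
    half = len(labels) // 2
    prior_idx, cert_idx = perm[:half], perm[half:]
    return (features[prior_idx], labels[prior_idx]), (features[cert_idx], labels[cert_idx])

# HAM10000: pool train/validation/test, then split 50-50 into sets of 5860 each, e.g.
# (prior_ham, cert_ham) = split_pool(*load_ham10000_pooled())  # hypothetical loader
```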
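
Finally, a minimal PyTorch sketch of the setup quoted in the Experiment Setup row: a two-layer MLP with 784, 100 and 2 units, trained with SGD (learning rate 0.01) on the cross-entropy loss for 20 epochs on MNIST or 5 on HAM10000. The ReLU activation, batch size, and validation handling are not specified in the quoted text and are assumptions here.

```python
# Sketch of the described MLP and training loop (activation and batching are assumptions).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(784, 100),  # 784 input units -> 100 hidden units
            nn.ReLU(),            # activation not stated in the paper; ReLU assumed
            nn.Linear(100, 2),    # 2 output units for the binarised labels
        )

    def forward(self, x):
        return self.net(x)

def train(model: nn.Module, train_loader: DataLoader, epochs: int = 20):
    """SGD with lr 0.01 on cross-entropy; 20 epochs for MNIST, 5 for HAM10000."""
    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            optimiser.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimiser.step()
    return model
```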