A PAC-Bayesian Generalization Bound for Equivariant Networks

Authors: Arash Behboodi, Gabriele Cesa, Taco S. Cohen

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In general, the bound indicates that using a larger group size in the model improves the generalization error, which is substantiated by extensive numerical experiments (see the PAC-Bayes template below). |
| Researcher Affiliation | Collaboration | Arash Behboodi (Qualcomm AI Research, Amsterdam; behboodi@qti.qualcomm.com); Gabriele Cesa (Qualcomm AI Research, Amsterdam, and AMLab, University of Amsterdam; gcesa@qti.qualcomm.com); Taco Cohen (Qualcomm AI Research, Amsterdam; tacos@qti.qualcomm.com) |
| Pseudocode | No | The paper presents mathematical derivations and model descriptions but includes no pseudocode or algorithm blocks. |
| Open Source Code | No | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] The code and the data are proprietary." |
| Open Datasets | Yes | "We have used datasets based on natural images and synthetic data. [...] we perform a larger study on the transformed MNIST datasets" (a data-construction sketch appears below). |
| Dataset Splits | No | The paper mentions training until a given margin is reached but does not report specific train/validation/test splits (e.g., percentages or sample counts per split). |
| Hardware Specification | No | "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]" |
| Software Dependencies | No | The paper states that training details were specified but does not name any software libraries or frameworks with version numbers. |
| Experiment Setup | Yes | "Models are trained until 99% of the training set is correctly classified with at least a margin γ. We used γ = 10 in the synthetic datasets and γ = 2 in the image ones." (A margin-check sketch appears below.) |
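
As context for the Research Type row: the report summarizes the paper's claim that a larger symmetry group tightens the generalization bound. The paper's actual bound is not reproduced in this report, so the block below gives only the standard McAllester-style PAC-Bayesian margin template (constants and log factors vary across statements in the literature); the paper's contribution can be read as controlling the capacity term of such a bound for equivariant networks.

```latex
% Generic PAC-Bayesian margin template (NOT the paper's specialized bound;
% constants and log factors vary across statements in the literature).
% With probability at least 1 - \delta over an i.i.d. sample S of size m,
% simultaneously for all posteriors Q over hypotheses:
\[
  L_{\mathcal{D}}(Q)
  \;\le\;
  \widehat{L}_{S,\gamma}(Q)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{m}}{\delta}}{2m}}
\]
% L_D(Q): expected 0-1 risk; \widehat{L}_{S,\gamma}(Q): empirical risk at
% margin \gamma; P: a prior fixed before seeing S. In the paper's setting,
% restricting to equivariant networks shrinks the effective capacity term
% as the group grows, which is the mechanism behind the quoted claim.
```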
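For the Open Datasets row: the exact transformations applied to MNIST are not specified in this report. The sketch below is a minimal, hedged example assuming planar rotations as the group action and the standard torchvision MNIST loader; it is illustrative, not the paper's pipeline.

```python
# Hedged sketch: build a rotated-MNIST-style dataset. The paper's actual
# transformation parameters are not given in this report; rotation is used
# here only as an illustrative group action.
import torch
from torchvision import datasets, transforms

def make_transformed_mnist(root="./data", max_angle=180.0, train=True):
    """Return MNIST with each image randomly rotated in [-max_angle, max_angle] degrees."""
    tf = transforms.Compose([
        transforms.RandomRotation(degrees=max_angle),  # group action: planar rotation
        transforms.ToTensor(),
    ])
    return datasets.MNIST(root=root, train=train, download=True, transform=tf)

# Usage (assumed setup): load one batch to sanity-check shapes.
if __name__ == "__main__":
    ds = make_transformed_mnist()
    loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)
    images, labels = next(iter(loader))
    print(images.shape, labels.shape)  # torch.Size([32, 1, 28, 28]) torch.Size([32])
```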
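For the Experiment Setup row: a minimal sketch of the quoted stopping rule (train until 99% of the training set is classified correctly with margin at least γ), assuming standard multiclass logits. All function and variable names are hypothetical, not from the paper.

```python
# Hedged sketch of the margin-based stopping criterion quoted above: stop
# once >= 99% of training points are classified correctly with multiclass
# margin at least gamma. Names are illustrative, not the paper's code.
import torch

def margin_satisfied_fraction(logits: torch.Tensor, labels: torch.Tensor,
                              gamma: float) -> float:
    """Fraction of examples with margin f_y(x) - max_{j != y} f_j(x) >= gamma."""
    true_scores = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))  # exclude the true class
    runner_up = masked.max(dim=1).values
    margins = true_scores - runner_up
    return (margins >= gamma).float().mean().item()

# Usage (assumed training loop): gamma = 10 for the synthetic datasets and
# gamma = 2 for the image ones, matching the values quoted in the table.
def should_stop(model, loader, gamma, threshold=0.99, device="cpu"):
    model.eval()
    hits, total = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            frac = margin_satisfied_fraction(model(x), y, gamma)
            hits += frac * len(y)
            total += len(y)
    return hits / total >= threshold
```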