Learning under Model Misspecification: Applications to Variational and Ensemble methods
Author: Andrés R. Masegosa
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments with Bayesian neural networks illustrate these findings. |
| Researcher Affiliation | Academia | Andrés R. Masegosa, University of Almería, andresma@ual.es |
| Pseudocode | No | The paper describes algorithms and refers to appendices for details but does not include structured pseudocode or algorithm blocks in the main text. |
| Open Source Code | Yes | The code to reproduce the results is available in https://github.com/PGM-Lab/PAC2BAYES. |
| Open Datasets | Yes | We performed the empirical evaluation on two data sets, Fashion-MNIST [58] and CIFAR-10 [30] (a dataset-loading sketch follows this table). |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test split percentages or counts in the main text; it defers to 'Full details in Appendix D' for these. |
| Hardware Specification | No | No specific hardware (e.g., GPU models, CPU types, memory amounts) used for running experiments is explicitly mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, TensorFlow x.x, PyTorch x.x) are provided. |
| Experiment Setup | No | The paper mentions 'Full details in Appendix D' for the experimental setup but does not provide specific hyperparameter values or training configurations in the main text. |
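Since the Open Datasets row cites Fashion-MNIST and CIFAR-10 but the main text states neither the splits nor the software stack, the snippet below is a minimal loading sketch, assuming TensorFlow/Keras is available (an assumption, not something the paper specifies). The 10% validation fraction is purely illustrative and is not taken from the paper or its Appendix D; the authoritative configuration lives in the PAC2BAYES repository.

```python
# Minimal sketch for fetching the two datasets cited in the paper.
# ASSUMPTION: TensorFlow/Keras is installed; the paper's main text does not
# state the framework or its version.
import numpy as np
import tensorflow as tf


def load_datasets(val_fraction=0.1, seed=0):
    """Load Fashion-MNIST and CIFAR-10 and carve an illustrative validation split.

    The 10% validation fraction is NOT taken from the paper; the official
    splits and hyperparameters are described in its Appendix D and repo.
    """
    rng = np.random.default_rng(seed)
    datasets = {}
    for name, loader in [
        ("fashion_mnist", tf.keras.datasets.fashion_mnist.load_data),
        ("cifar10", tf.keras.datasets.cifar10.load_data),
    ]:
        (x_train, y_train), (x_test, y_test) = loader()
        x_train = x_train.astype("float32") / 255.0  # scale pixels to [0, 1]
        x_test = x_test.astype("float32") / 255.0
        # Hypothetical validation split, for illustration only.
        n_val = int(len(x_train) * val_fraction)
        idx = rng.permutation(len(x_train))
        val_idx, train_idx = idx[:n_val], idx[n_val:]
        datasets[name] = {
            "train": (x_train[train_idx], y_train[train_idx]),
            "val": (x_train[val_idx], y_train[val_idx]),
            "test": (x_test, y_test),
        }
    return datasets


if __name__ == "__main__":
    data = load_datasets()
    for name, splits in data.items():
        print(name, {k: len(v[0]) for k, v in splits.items()})
```

Anyone attempting a faithful reproduction should instead follow the setup in https://github.com/PGM-Lab/PAC2BAYES and the paper's Appendix D, since the hyperparameters and exact splits are not recoverable from the main text alone.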