Discriminative Calibration: Check Bayesian Computation from Simulations and Flexible Classifier

Authors: Yuling Yao, Justin Domke

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We illustrate an automated implementation using neural networks and statistically-inspired features, and validate the method with numerical and real data experiments.
Researcher Affiliation | Collaboration | Yuling Yao, Flatiron Institute, New York, NY 10010 (yyao@yyao.dev); Justin Domke, University of Massachusetts, Amherst, MA 01002 (domke@cs.umass.edu)
Pseudocode | Yes | Algorithm 1: Proposed method: Discriminative calibration (a hedged sketch of the example-construction step appears after the table)
Open Source Code | Yes | We share Jax implementation of our binary and multiclass classifier calibration in Github: https://github.com/yao-yl/DiscCalibration
Open Datasets | Yes | Next, we apply our calibration to three models from the SBI benchmark [23]: the simple likelihood complex posterior (SLCP), the Gaussian linear, and the Gaussian mixture model. ... [23] Lueckmann, J.-M., Boelts, J., Greenberg, D., Goncalves, P., and Macke, J. (2021). Benchmarking simulation-based inference. In International Conference on Artificial Intelligence and Statistics. (a task-loading sketch appears after the table)
Dataset Splits | Yes | Randomly split the LS classification examples (t, ϕ) into training and validation sets (all L examples for a given i go to either training or validation); ... We tune the weight of the decay term by a 5 fold cross-validation in the training set on a fixed grid {0.1, 0.01, 0.001, 0.0001}. (a group-wise split sketch appears after the table)
Hardware Specification | No | The paper mentions 'one classification run with roughly one million examples took roughly two hour cpu time on a local laptop' (Appendix B.2), but it does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | We share the python and Jax implementation of our binary and multiclass classifier calibration in https://github.com/yao-yl/DiscCalibration. No specific version numbers for Python or Jax are provided.
Experiment Setup | Yes | In the MLP training we include a standard L2 weight decay (i.e., training loss function = cross entropy loss + tuning weight L2 penalization). We tune the weight of the decay term by a 5 fold cross-validation in the training set on a fixed grid {0.1, 0.01, 0.001, 0.0001}. ... we use one-hidden-layer MLP with 64 nodes to parameterize the classifier with the form (11), with additional pre-learned features such as log q(θ|y) added as linear features. (an MLP-with-weight-decay sketch appears after the table)
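The "Pseudocode" row refers to Algorithm 1 (discriminative calibration). Below is a minimal sketch of the classification-example construction step only, assuming the binary variant in which the prior draw of each simulation run gets label t = 0 and the M approximate posterior draws get label t = 1; the feature map `phi`, all array names, and the label convention are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of building binary classification examples per simulation run.
# `phi`, `prior_draws`, `data`, `posterior_draws` are hypothetical names.
import numpy as np

def build_examples(prior_draws, data, posterior_draws,
                   phi=lambda theta, y: np.concatenate([theta, y])):
    """prior_draws: (S, d_theta); data: (S, d_y); posterior_draws: (S, M, d_theta)."""
    features, labels, sim_ids = [], [], []
    S, M, _ = posterior_draws.shape
    for i in range(S):
        # one example from the prior/joint draw of run i ...
        features.append(phi(prior_draws[i], data[i]))
        labels.append(0)
        sim_ids.append(i)
        # ... and M examples pairing the same y_i with approximate posterior draws
        for m in range(M):
            features.append(phi(posterior_draws[i, m], data[i]))
            labels.append(1)
            sim_ids.append(i)
    return np.stack(features), np.array(labels), np.array(sim_ids)
```

If the inference were perfectly calibrated, a classifier trained to predict t from these features should not beat the class frequencies; the paper's actual test statistic and multiclass variant are not reproduced here.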
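For the "Open Datasets" row: the three models come from the SBI benchmark of Lueckmann et al. [23], available as the `sbibm` Python package. A minimal loading sketch follows; the exact task-name strings and the sample count are assumptions rather than details taken from the paper.

```python
# Sketch of pulling the three benchmark tasks with the sbibm package [23].
# Task-name strings and num_samples are assumptions.
import sbibm

for name in ["slcp", "gaussian_linear", "gaussian_mixture"]:
    task = sbibm.get_task(name)
    prior = task.get_prior()          # callable returning prior draws of theta
    simulator = task.get_simulator()  # callable mapping theta -> simulated data
    theta = prior(num_samples=1000)
    y = simulator(theta)
    print(name, tuple(theta.shape), tuple(y.shape))
```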
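For the "Dataset Splits" row: the split is group-wise, so every example sharing a simulation index i lands entirely in training or entirely in validation. A minimal numpy sketch, assuming the `sim_ids` array from the construction sketch above and an illustrative 80/20 split:

```python
# Group-wise train/validation split over simulation indices.
# The 80/20 fraction and the seed are assumptions.
import numpy as np

def split_by_simulation(sim_ids, val_frac=0.2, seed=0):
    rng = np.random.default_rng(seed)
    unique_ids = rng.permutation(np.unique(sim_ids))
    n_val = int(round(val_frac * len(unique_ids)))
    val_ids = unique_ids[:n_val]
    val_mask = np.isin(sim_ids, val_ids)
    return np.where(~val_mask)[0], np.where(val_mask)[0]  # train_idx, val_idx
```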
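For the "Experiment Setup" row: a self-contained JAX/optax sketch of the stated configuration, i.e. a one-hidden-layer MLP with 64 units trained on cross entropy plus an L2 weight-decay term whose coefficient is selected by 5-fold cross-validation over {0.1, 0.01, 0.001, 0.0001}. The optimizer, learning rate, step count, and every function name are assumptions; pre-learned features such as log q(θ|y) would be appended as extra input columns before calling these functions.

```python
# JAX/optax sketch of the described classifier: one hidden layer (64 units),
# cross entropy + lam * L2 penalty, with lam picked by 5-fold CV on a fixed grid.
# Optimizer, learning rate, and step count are assumptions.
import jax
import jax.numpy as jnp
import numpy as np
import optax

def init_params(key, d_in, d_hidden=64):
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (d_in, d_hidden)) / jnp.sqrt(d_in),
        "b1": jnp.zeros(d_hidden),
        "W2": jax.random.normal(k2, (d_hidden, 1)) / jnp.sqrt(d_hidden),
        "b2": jnp.zeros(1),
    }

def logits(params, x):
    h = jax.nn.relu(x @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"]).squeeze(-1)

def loss_fn(params, x, t, lam):
    # binary cross entropy plus L2 penalty on all parameters
    ce = optax.sigmoid_binary_cross_entropy(logits(params, x), t).mean()
    l2 = sum(jnp.sum(w ** 2) for w in jax.tree_util.tree_leaves(params))
    return ce + lam * l2

def train(x, t, lam, key, steps=2000, lr=1e-3):
    params = init_params(key, x.shape[1])
    opt = optax.adam(lr)
    opt_state = opt.init(params)

    @jax.jit
    def step(params, opt_state):
        grads = jax.grad(loss_fn)(params, x, t, lam)
        updates, opt_state = opt.update(grads, opt_state)
        return optax.apply_updates(params, updates), opt_state

    for _ in range(steps):
        params, opt_state = step(params, opt_state)
    return params

def cv_select_lambda(x, t, key, grid=(0.1, 0.01, 0.001, 0.0001), k=5):
    # 5-fold cross-validation on the training set over the fixed grid;
    # folds are split at random here (grouping by simulation index is omitted).
    folds = np.array_split(np.random.default_rng(0).permutation(len(t)), k)
    scores = []
    for lam in grid:
        fold_ce = []
        for j in range(k):
            val = folds[j]
            tr = np.concatenate([folds[m] for m in range(k) if m != j])
            params = train(x[tr], t[tr], lam, key)
            ce = optax.sigmoid_binary_cross_entropy(logits(params, x[val]), t[val]).mean()
            fold_ce.append(float(ce))
        scores.append(np.mean(fold_ce))
    return grid[int(np.argmin(scores))]
```

After `cv_select_lambda` picks the coefficient on the training portion, the classifier would presumably be refit on the full training set and evaluated on the held-out simulations.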