Improving model calibration with accuracy versus uncertainty optimization

Authors: Ranganath Krishnan, Omesh Tickoo

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experiments demonstrate our approach yields better model calibration than existing methods on large-scale image classification tasks under distributional shift." |
| Researcher Affiliation | Industry | Ranganath Krishnan, Intel Labs (ranganath.krishnan@intel.com); Omesh Tickoo, Intel Labs (omesh.tickoo@intel.com) |
| Pseudocode | Yes | Algorithm 1: SVI-AvUC optimization |
| Open Source Code | Yes | "We have made the code available to facilitate the probabilistic deep learning community to evaluate and improve model calibration for various other baselines." (https://github.com/IntelLabs/AVUC) |
| Open Datasets | Yes | "We use ResNet-50 and ResNet-20 [46] DNN architectures on ImageNet [47] and CIFAR10 [48] datasets respectively." |
| Dataset Splits | No | The paper mentions a "held-out validation set" and "test data (in-distribution)" but does not provide specific percentages or sample counts for these splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or processor types) used for its experiments. |
| Software Dependencies | No | The paper cites PyTorch as a deep learning library but does not list specific versions of it or of other software dependencies needed for replication. |
| Experiment Setup | Yes | "We provide details of our model implementations and hyperparameters for SVI, SVI-TS, SVI-AvUC, SVI-AvUTS and Radial BNN in Appendix B." |
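The table above notes that the paper supplies pseudocode (Algorithm 1, SVI-AvUC optimization). As a rough illustration only, the sketch below shows what an accuracy-versus-uncertainty calibration loss of this general shape can look like: predictions are partitioned into accurate/inaccurate and certain/uncertain groups, soft counts are accumulated, and the loss penalizes accurate-but-uncertain and inaccurate-but-certain mass. The function name `avuc_loss`, the fixed uncertainty threshold, and the `tanh` squashing are assumptions of this sketch, not a reproduction of the authors' Algorithm 1; consult the paper and the released code for the actual method.

```python
import numpy as np

def avuc_loss(probs, uncertainties, labels, u_threshold=0.5):
    """Illustrative AvUC-style calibration loss (not the paper's exact Algorithm 1).

    probs:         (N, C) predicted class probabilities
    uncertainties: (N,) per-sample uncertainty, e.g. predictive entropy
    labels:        (N,) integer class labels
    u_threshold:   assumed fixed threshold separating certain from uncertain
    """
    preds = probs.argmax(axis=1)
    p = probs.max(axis=1)                      # confidence of the predicted class
    accurate = preds == labels
    certain = uncertainties < u_threshold
    tu = np.tanh(uncertainties)                # squash uncertainty into [0, 1)

    # Soft counts for the four accuracy/uncertainty outcomes.
    n_ac = np.sum(p[accurate & certain] * (1 - tu[accurate & certain]))
    n_au = np.sum(p[accurate & ~certain] * tu[accurate & ~certain])
    n_ic = np.sum((1 - p[~accurate & certain]) * (1 - tu[~accurate & certain]))
    n_iu = np.sum((1 - p[~accurate & ~certain]) * tu[~accurate & ~certain])

    # Loss shrinks as accurate-certain and inaccurate-uncertain mass dominates.
    return np.log(1.0 + (n_au + n_ic) / (n_ac + n_iu + 1e-12))
```

In practice a term like this would be added to the primary training objective (e.g. the SVI loss) so that gradients push the model toward low uncertainty on correct predictions and high uncertainty on incorrect ones.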