Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions

Authors: Ahmed M. Alaa, Mihaela van der Schaar

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that DJ performs competitively with existing Bayesian and non-Bayesian regression baselines.
Researcher Affiliation | Academia | Ahmed M. Alaa (UCLA); Mihaela van der Schaar (UCLA, Cambridge University).
Pseudocode | Yes | Algorithm 1: The Discriminative Jackknife.
Open Source Code | No | The paper does not provide an explicit statement about the release of its source code or a link to a code repository.
Open Datasets | Yes | Four UCI benchmark regression datasets: yacht hydrodynamics (Yacht), Boston housing (Housing), energy efficiency (Energy), and naval propulsion (Naval) (Dua & Graff, 2017). Reference: Dua, D. and Graff, C. UCI Machine Learning Repository, 2017. URL: http://archive.ics.uci.edu/ml.
Dataset Splits | No | The paper reports using "80% of the data for training and 20% for testing" but does not specify a separate validation split or explicit cross-validation details in the main text (a hypothetical loading and splitting sketch follows the table).
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions an "Adam optimizer with default settings" but does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "In all experiments, we fit a 2-layer feed-forward neural network with 100 hidden units and compute the DJ confidence intervals using the post-hoc procedure in Algorithm 1." and "We use a 2-layer neural network model with 100 hidden units, Tanh activation functions, MSE loss, and a single set of learning hyper-parameters for all baselines (1000 epochs with 100 samples per minibatch, and an Adam optimizer with default settings)." (A minimal training sketch follows the table.)
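
The Dataset Splits row quotes an 80%/20% train/test division of the UCI tables. Below is a minimal, hypothetical sketch of such a split; the file name, target column, and random seed are assumptions, since the paper does not describe how the UCI files were parsed or whether the split was seeded.

```python
# Hypothetical 80/20 train/test split for one of the UCI regression datasets.
# Paths, column names, and the seed are placeholders, not taken from the paper.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

def load_uci_split(csv_path, target_col, seed=0):
    """Load a UCI regression table and return an 80/20 train/test split."""
    df = pd.read_csv(csv_path)
    X = df.drop(columns=[target_col]).to_numpy(dtype=np.float32)
    y = df[target_col].to_numpy(dtype=np.float32)
    # 80% training / 20% testing, matching the split quoted from the paper.
    return train_test_split(X, y, test_size=0.2, random_state=seed)

# Example usage (file and column names are illustrative only):
# X_tr, X_te, y_tr, y_te = load_uci_split("yacht_hydrodynamics.csv", "resistance")
```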
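The Experiment Setup row quotes the network and training hyper-parameters (100 hidden units, Tanh, MSE loss, 1000 epochs, minibatches of 100, Adam with default settings). The PyTorch sketch below instantiates those quoted settings; interpreting "2-layer" as two hidden layers of 100 units each is an assumption, and the DJ confidence-interval computation from Algorithm 1 itself is not reproduced here.

```python
# Minimal sketch of the quoted baseline training setup (not the DJ procedure).
# Architecture depth is an assumption: two hidden layers of 100 Tanh units.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def build_model(in_dim):
    return nn.Sequential(
        nn.Linear(in_dim, 100), nn.Tanh(),
        nn.Linear(100, 100), nn.Tanh(),
        nn.Linear(100, 1),
    )

def train(X_tr, y_tr, epochs=1000, batch_size=100):
    model = build_model(X_tr.shape[1])
    opt = torch.optim.Adam(model.parameters())  # default settings, as quoted
    loss_fn = nn.MSELoss()
    loader = DataLoader(
        TensorDataset(torch.as_tensor(X_tr),
                      torch.as_tensor(y_tr).reshape(-1, 1)),
        batch_size=batch_size, shuffle=True,
    )
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model
```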