Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates

Authors: Dan Ley, Umang Bhatt, Adrian Weller

AAAI 2022, pp. 7390-7398 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show that δ-CLUE, ∇-CLUE, and GLAM-CLUE all address shortcomings of CLUE and provide beneficial explanations of uncertainty estimates to practitioners. We perform experiments on 3 datasets to validate our methods: UCI Credit classification (Dua and Graff 2017), MNIST image classification (Le Cun 1998) and Synbols image classification (Lacoste et al. 2020).
Researcher Affiliation | Academia | Dan Ley¹, Umang Bhatt¹, Adrian Weller¹,² (¹University of Cambridge, UK; ²The Alan Turing Institute, UK)
Pseudocode | Yes | Algorithm 1: ∇-CLUE (simultaneous) and Algorithm 2: GLAM-CLUE (Training Step) are present in the paper (illustrative sketches of both follow this table).
Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We perform experiments on 3 datasets to validate our methods: UCI Credit classification (Dua and Graff 2017), MNIST image classification (Le Cun 1998) and Synbols image classification (Lacoste et al. 2020).
Dataset Splits | No | The paper mentions using 'training data' and 'test data' for experiments, but does not provide specific details on dataset split percentages (e.g., train/validation/test ratios or sample counts) needed for reproduction.
Hardware Specification | No | The paper mentions 'CPU time' in Table 2, but does not provide specific details on the hardware used for experiments (e.g., specific CPU or GPU models, memory, or cloud instance types).
Software Dependencies | No | The paper mentions models like VAEs and BNNs, and refers to general methods or algorithms, but does not list specific software libraries or packages with version numbers (e.g., TensorFlow, PyTorch, scikit-learn versions) that were used.
Experiment Setup | Yes | Two configurations are reported: batch size 8 (the 8 most uncertain MNIST digits), learning rate 0.1, 30 iterations; and batch size 6000 (all 4s in the training set), learning rate 0.1, 30 iterations. The sketches below reuse these values.
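
To make the Pseudocode and Experiment Setup rows concrete, here is a minimal sketch of a CLUE-style latent-space search in the spirit of Algorithm 1. It assumes PyTorch and placeholder `encoder`, `decoder` and `uncertainty_fn` callables standing in for the VAE and BNN mentioned above; the learning rate (0.1) and iteration count (30) come from the Experiment Setup row, while `dist_weight` is a hypothetical knob. The paper releases no code, so this is an illustration, not the authors' implementation.

```python
# Hypothetical sketch of a CLUE-style counterfactual search (not the authors' code).
# encoder, decoder and uncertainty_fn are placeholders for a trained VAE and a BNN's
# predictive-uncertainty function, neither of which is specified on this page.
import torch

def clue_search(x, encoder, decoder, uncertainty_fn,
                lr=0.1, n_iters=30, dist_weight=1.0):
    """Gradient descent in latent space to reduce predictive uncertainty.

    x : batch of uncertain inputs, e.g. the 8 most uncertain MNIST digits.
    """
    # Start from the latent code of the original input and optimise it directly.
    z = encoder(x).detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)  # lr=0.1, 30 iterations as reported above

    for _ in range(n_iters):
        opt.zero_grad()
        x_cf = decoder(z)  # candidate counterfactual in input space
        # Make the model certain while staying close to the original input.
        loss = (uncertainty_fn(x_cf)
                + dist_weight * (x_cf - x).flatten(1).norm(dim=1)).sum()
        loss.backward()
        opt.step()

    return decoder(z).detach()
```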
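A similarly hedged sketch of the GLAM-CLUE training step (Algorithm 2): one amortised mapping is learned for a whole group of uncertain inputs (the 6000 MNIST 4s in the setup above), so that counterfactuals can later be produced in a single function call rather than a per-input search. The shared-latent-shift parameterisation below is an illustrative assumption, not necessarily the mapping used in the paper.

```python
# Hypothetical sketch of a GLAM-CLUE-style amortised mapping (not the authors' code).
import torch
import torch.nn as nn

def train_glam_mapping(x_group, encoder, decoder, uncertainty_fn,
                       lr=0.1, n_iters=30):
    """Learn a single latent-space shift for one group of uncertain inputs."""
    with torch.no_grad():
        z = encoder(x_group)  # latent codes of the uncertain group (e.g. all 4s)
    shift = nn.Parameter(torch.zeros_like(z[:1]))  # one shared translation vector
    opt = torch.optim.SGD([shift], lr=lr)          # lr=0.1, 30 iterations as reported

    for _ in range(n_iters):
        opt.zero_grad()
        x_cf = decoder(z + shift)           # amortised counterfactuals for the whole group
        loss = uncertainty_fn(x_cf).mean()  # drive the predictive model towards certainty
        loss.backward()
        opt.step()

    # At test time a new uncertain input only needs encode -> add shift -> decode.
    return shift.detach()
```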