Concept-based Explanations for Out-of-Distribution Detectors

Authors: Jihye Choi, Jayaram Raghuram, Ryan Feng, Jiefeng Chen, Somesh Jha, Atul Prakash

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct experiments to evaluate the proposed method and show that: 1) the learned concepts satisfy the desiderata of completeness and separability across popular off-the-shelf OOD detectors and real-world datasets; 2) the learned concepts can be combined with a Shapley value to provide insightful visual explanations that can help understand the predictions of an OOD detector. The code for our work can be found at https://github.com/jihyechoi77/concepts-for-ood.
Researcher Affiliation | Academia | 1 University of Wisconsin-Madison, 2 University of Michigan.
Pseudocode | Yes | Algorithm 1 (Learning concepts for OOD detector). INPUT: entire training set Dtr = {Dtr_in, Dtr_out}, entire validation set Dval = {Dval_in, Dval_out}, classifier f, detector Dγ. INITIALIZE: concept vectors C = [c1, ..., cm] and parameters of the network g. OUTPUT: C and g.
Open Source Code | Yes | The code for our work can be found at https://github.com/jihyechoi77/concepts-for-ood.
Open Datasets | Yes | For the ID dataset, we use Animals with Attributes (AwA) (Xian et al., 2018) with 50 animal classes, and split it into a train set (29841 images), validation set (3709 images), and test set (3772 images). We use the MSCOCO dataset (Lin et al., 2014) as the auxiliary OOD dataset Dtr_out for training and validation.
Dataset Splits | Yes | For the ID dataset, we use Animals with Attributes (AwA) (Xian et al., 2018) with 50 animal classes, and split it into a train set (29841 images), validation set (3709 images), and test set (3772 images).
Hardware Specification | Yes | We ran all our experiments with TensorFlow, Keras, and NVIDIA GeForce RTX 2080 Ti GPUs.
Software Dependencies | No | The paper mentions using 'Tensorflow' and 'Keras' but does not specify their version numbers, which is required for reproducibility.
Experiment Setup | Yes | Hyperparameters for Concept Learning. Throughout the experiments, we fix the number of concepts to m = 100 (unless specifically mentioned otherwise), and following the implementation of (Yeh et al., 2020), we set λexpl = 10 and g to be a two-layer fully-connected neural network with 500 neurons in the hidden layer.
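The concept-learning setup quoted above (m = 100 concept vectors c1, ..., cm, and g a two-layer fully-connected network with 500 hidden neurons) can be sketched as follows. This is a minimal NumPy illustration, not the paper's TensorFlow/Keras implementation; the feature dimension (512), the random initialization, and the unit-normalization of concept vectors are assumptions made here for the sake of a runnable example.

```python
import numpy as np

# Minimal sketch of the quoted setup: m = 100 concept vectors and a
# two-layer fully-connected network g with 500 hidden neurons.
# The feature dimension (512) and initialization scales are
# assumptions, not values taken from the paper.

rng = np.random.default_rng(0)
feature_dim, m, hidden = 512, 100, 500

# Concept matrix C = [c1, ..., cm], one column per concept,
# unit-normalized (a common convention; assumed here).
C = rng.normal(size=(feature_dim, m))
C /= np.linalg.norm(C, axis=0, keepdims=True)

# Two-layer fully-connected network g: concept scores -> feature space.
W1 = rng.normal(scale=0.01, size=(m, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.01, size=(hidden, feature_dim))
b2 = np.zeros(feature_dim)

def concept_scores(features):
    """Project features onto the concept vectors (one score per concept)."""
    return features @ C

def g(scores):
    """Map concept scores back to the feature space via a ReLU MLP."""
    h = np.maximum(scores @ W1 + b1, 0.0)  # hidden layer, 500 units
    return h @ W2 + b2

# Toy forward pass on stand-in features:
feats = rng.normal(size=(4, feature_dim))
scores = concept_scores(feats)   # shape (4, 100)
recon = g(scores)                # shape (4, 512)
print(scores.shape, recon.shape)
```

Each input feature vector yields one score per concept, and g reconstructs the feature vector from those scores; in the paper this reconstruction quality underlies the completeness desideratum.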