Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)

Authors: Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viégas, Rory Sayres

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We perform experiments using TCAV to gain insights and reveal dataset biases in widely-used neural network models and with a medical application (diabetic retinopathy), confirming our findings with a domain expert. We conduct human subject experiments to quantitatively evaluate feature-based explanations and to contrast with TCAV. |
| Researcher Affiliation | Industry | Correspondence to: Been Kim <beenkim@google.com>. |
| Pseudocode | No | The paper describes its methods in prose and mathematical formulations but does not include pseudocode or an algorithm block (a hedged sketch of the TCAV computation follows this table). |
| Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that code for the described methodology is publicly available. |
| Open Datasets | Yes | We applied TCAV for two widely used networks (Szegedy et al., 2015; 2016). We tried various types of CAVs, including color, texture, objects, gender and race. Note that none of these concepts were in the set of the network's class labels; instead all were collected from (Bau et al., 2017; Huang et al., 2007; Russakovsky et al., 2015) or a popular image search engine. |
| Dataset Splits | No | The paper mentions a "held out test set of 1/3 the size of the training set" but does not specify a distinct validation split or percentages for all three (train/val/test) splits needed for reproduction. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific versions for any software components or libraries used in the experiments. |
| Experiment Setup | No | The paper describes the general approach and data usage but does not provide specific experimental setup details such as hyperparameters (e.g., learning rates, batch sizes, epochs) for the training or evaluation processes. |
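Because the paper contains no algorithm block or released code, the following is a minimal, hypothetical sketch of the TCAV score computation as it is described in the paper's prose and equations: a linear classifier (logistic regression here, chosen as one reasonable option) separates concept activations from random activations at a chosen layer, the CAV is the normalized vector orthogonal to the resulting decision boundary, and the TCAV score is the fraction of class examples whose directional derivative along the CAV is positive. All function and variable names below (`compute_cav`, `tcav_score`, `concept_acts`, etc.) are illustrative, not taken from any official implementation.

```python
# Hypothetical sketch of the TCAV score computation, assuming layer activations
# and gradients of the class logit w.r.t. those activations are already extracted.
import numpy as np
from sklearn.linear_model import LogisticRegression


def compute_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Fit a linear classifier separating concept vs. random activations at one
    layer; the CAV is the unit vector orthogonal to its decision boundary,
    oriented toward the concept examples."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)


def tcav_score(class_grads: np.ndarray, cav: np.ndarray) -> float:
    """Fraction of class examples whose directional derivative along the CAV is
    positive, i.e. the TCAV score for one (concept, class, layer) triple.
    class_grads has shape (n_examples, n_layer_features)."""
    directional_derivatives = class_grads @ cav
    return float(np.mean(directional_derivatives > 0))
```

The paper additionally repeats this procedure against multiple random counterparts and applies a statistical significance test to reject spurious CAVs; that filtering step is omitted from this sketch for brevity.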