Towards Automatic Concept-based Explanations

Authors: Amirata Ghorbani, James Wexler, James Y. Zou, Been Kim

NeurIPS 2019

Reproducibility assessment: each variable below lists the result followed by the supporting LLM response.
Research Type: Experimental
"Our systematic experiments demonstrate that ACE discovers concepts that are human-meaningful, coherent and important for the neural network's predictions."
Researcher Affiliation: Collaboration
Amirata Ghorbani (Stanford University, amiratag@stanford.edu; work done while interning at Google Brain), James Wexler (Google Brain, jwexler@google.com), James Zou (Stanford University, jamesz@stanford.edu), Been Kim (Google Brain, beenkim@google.com).
Pseudocode: No
The paper provides a visual representation of the ACE algorithm's steps in Figure 1, accompanied by descriptive text. However, it does not include a formal pseudocode block or a section explicitly labeled "Algorithm".
Open Source Code: Yes
Implementation available: https://github.com/amiratag/ACE
Open Datasets: Yes
"As an experimental example, we use ACE to interpret the widely-used Inception-V3 model [36] trained on ILSVRC2012 data set (ImageNet) [32]."
[32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
Dataset Splits: Yes
"We use 1000 randomly selected ImageNet validation images from the same 100 classes."
Hardware Specification: No
The paper mentions using the Inception-V3 model and discusses computational cost, but it does not specify any particular hardware components such as GPU models, CPU types, or cloud computing instances used for training or experimentation.
Software Dependencies: No
The paper mentions using SLIC for segmentation and states that "All the code is written in Python using TensorFlow" in Appendix A. However, it does not provide version numbers for any of these software components (SLIC, Python, TensorFlow).
Experiment Setup: Yes
"We select a subset of 100 classes out of the 1000 classes in the data set to apply ACE." In the ImageNet experiments, "50 images was sufficient to extract enough examples of concepts; possibly because the concepts are frequently present in these images." The segmentation step is performed using SLIC [2], chosen for its speed and performance after examining several super-pixel methods [10, 26, 41], with three resolutions of 15, 50, and 80 segments per image. For the similarity metric, the authors examined the Euclidean distance in several layers of the ImageNet-trained Inception-V3 architecture and chose the mixed_8 layer. K-Means clustering is performed, and outliers are removed using Euclidean distance to the cluster centers.
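The clustering and outlier-removal step described above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' released implementation: it assumes segment embeddings (e.g. mixed_8 activations of resized SLIC segments) have already been computed upstream, uses a hand-rolled K-Means in place of a library call, and the parameter values `k` and `keep_frac` are hypothetical, not the paper's settings.

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Minimal K-Means (Euclidean distance), standing in for a library KMeans."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(x[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        # recompute centers; keep the old center if a cluster emptied
        centers = np.stack([
            x[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels, centers

def extract_concepts(embeddings, k=25, keep_frac=0.8):
    """Cluster segment embeddings into candidate concepts, then drop the
    members farthest from each cluster center (ACE's outlier removal).
    Returns one array of kept segment indices per concept cluster."""
    labels, centers = kmeans(embeddings, k)
    dist = np.linalg.norm(embeddings - centers[labels], axis=1)
    concepts = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        # keep the keep_frac fraction of members closest to the center
        keep = idx[np.argsort(dist[idx])[: max(1, int(keep_frac * len(idx)))]]
        concepts.append(keep)
    return concepts
```

In the actual pipeline each kept cluster becomes a candidate concept whose importance is then scored (e.g. via TCAV); that scoring step is outside this sketch.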