Explaining Deep Learning Models -- A Bayesian Non-parametric Approach

Authors: Wenbo Guo, Sui Huang, Yunzhe Tao, Xinyu Xing, Lin Lin

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To demonstrate the utility of our approach, we evaluate it on different ML models in the context of image recognition. The empirical results indicate that our proposed approach not only outperforms the state-of-the-art techniques in explaining individual decisions but also provides users with an ability to discover the vulnerabilities of the target ML models."
Researcher Affiliation | Collaboration | Wenbo Guo, The Pennsylvania State University (wzg13@ist.psu.edu); Sui Huang, Netflix Inc. (shuang@netflix.com); Yunzhe Tao, Columbia University (y.tao@columbia.edu); Xinyu Xing, The Pennsylvania State University (xxing@ist.psu.edu); Lin Lin, The Pennsylvania State University (llin@psu.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "As a first step, we utilize Keras [2] to train an MLP on MNIST dataset [16] and CNNs to classify clothing images in Fashion-MNIST dataset [34] respectively. ... We also evaluate the explainability of our proposed solution on the VGG16 model [27] trained from ImageNet dataset [5]."
Dataset Splits | No | The paper describes using "bootstrapped samples" and "randomly selected 10000 data samples" to evaluate the explanation method, but it does not specify the train/validation/test splits used for training the target ML models (MLP, CNNs, VGG16) or its own DMM-MEN model.
Hardware Specification | No | The paper acknowledges "the support of NVIDIA Corporation with the donation of the GPU" but does not specify a concrete GPU model, CPU, memory, or other hardware details used to run the experiments.
Software Dependencies | No | The paper mentions using Keras but does not give its version or list any other software dependencies with version numbers.
Experiment Setup | No | The paper does not report experiment setup details such as hyperparameters (learning rate, batch size, number of epochs, optimizer settings) used to train the models.
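For context on the "Open Datasets" and "Experiment Setup" rows: the paper trains an MLP on MNIST with Keras but, as noted above, reports neither the architecture nor the training hyperparameters. A minimal sketch of what such a target model could look like is below; the layer sizes, optimizer, and training settings are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from tensorflow import keras

def build_mlp(input_dim=784, num_classes=10):
    """Build a simple MLP classifier for flattened 28x28 MNIST digits.

    The architecture (two hidden layers, Adam, cross-entropy) is an
    assumption for illustration; the paper does not report its setup.
    """
    model = keras.Sequential([
        keras.layers.Input(shape=(input_dim,)),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_mlp()

# Training would use MNIST loaded via keras.datasets.mnist.load_data(),
# with images flattened to 784-vectors and scaled to [0, 1], e.g.:
#   model.fit(x_train, y_train, epochs=5, batch_size=128)
# (epochs/batch_size here are placeholders, not reported values).

# The untrained model already produces a probability distribution per input:
probs = model.predict(np.random.rand(2, 784).astype("float32"), verbose=0)
```

An explanation method like the paper's would then query `model.predict` around individual inputs to approximate the model's local decision behavior.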