Explaining Deep Neural Network Models with Adversarial Gradient Integration

Authors: Deng Pan, Xin Li, Dongxiao Zhu

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we perform experiments attempting to answer the following questions: 1) does AGI output meaningful interpretations for classifying the true class? 2) does class subsampling compromise the performance? 3) does individual AGI give reasonable interpretation for discriminating the true class against a false class? and 4) does AGI pass sanity checks? All experiments are conducted using the ImageNet dataset.
Researcher Affiliation | Academia | Deng Pan, Xin Li and Dongxiao Zhu, Department of Computer Science, Wayne State University, USA. {pan.deng, xinlee, dzhu}@wayne.edu
Pseudocode | Yes | Algorithm 1: Individual AGI(f, x, i, ϵ, m) and Algorithm 2: AGI(f, x, ϵ, k, m) are presented (an illustrative sketch follows this table).
Open Source Code | Yes | Code is available from https://github.com/pd90506/AGI.
Open Datasets | Yes | All experiments are conducted using the ImageNet dataset.
Dataset Splits | No | The paper mentions evaluating on "1000 test examples" from ImageNet, but it does not specify training or validation split percentages or absolute sample counts for each split. It relies on pre-trained models rather than training models from scratch with explicit splits.
Hardware Specification | Yes | For Inception V3, setting the max ascending step = 20, and sample size = 20, it will cost 15 seconds to interpret a single 224 × 224 color image on a computer with Nvidia GTX 1080 GPU.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | Regarding parameter settings, we set the step size ϵ = 0.05, and the class subsampling size for ImageNet to 20. For Inception V3, setting the max ascending step = 20, and sample size = 20. (A hypothetical usage example with these settings follows this table.)
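
The two algorithms referenced in the Pseudocode row are not reproduced on this page. Below is a minimal PyTorch-style sketch inferred only from the quoted signatures Individual AGI(f, x, i, ϵ, m) and AGI(f, x, ϵ, k, m): the adversarial step direction, sign conventions, normalization, stopping rule, and aggregation are assumptions for illustration, and the function names do not come from the released repository.

```python
import torch
import torch.nn.functional as F

def individual_agi(f, x, i, eps=0.05, m=20):
    """Sketch of Algorithm 1, Individual AGI(f, x, i, eps, m):
    walk x toward a false class i with targeted adversarial steps and
    accumulate true-class gradients along that path."""
    true_class = f(x).argmax(dim=1).item()      # class being explained
    x_adv = x.clone().detach()
    attribution = torch.zeros_like(x)

    for _ in range(m):                          # max ascending steps
        x_adv.requires_grad_(True)
        log_probs = F.log_softmax(f(x_adv), dim=1)

        # Direction that increases the false-class score (targeted attack step).
        grad_false = torch.autograd.grad(log_probs[0, i], x_adv,
                                         retain_graph=True)[0]
        # Gradient of the true-class score, accumulated along the path.
        grad_true = torch.autograd.grad(log_probs[0, true_class], x_adv)[0]

        step = eps * grad_false / (grad_false.norm() + 1e-12)
        attribution += -grad_true * step        # Riemann-sum style accumulation

        x_adv = (x_adv + step).detach()
        if log_probs.argmax(dim=1).item() == i: # stop once the attack succeeds
            break
    return attribution

def agi(f, x, eps=0.05, k=20, m=20):
    """Sketch of Algorithm 2, AGI(f, x, eps, k, m): aggregate individual AGI
    maps over k false classes subsampled uniformly at random."""
    logits = f(x)
    true_class = logits.argmax(dim=1).item()
    candidates = [c for c in range(logits.shape[1]) if c != true_class]
    chosen = torch.randperm(len(candidates))[:k].tolist()
    maps = [individual_agi(f, x, candidates[j], eps, m) for j in chosen]
    return torch.stack(maps).mean(dim=0)
```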
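
A hypothetical end-to-end call with the settings quoted in the Experiment Setup row (step size ϵ = 0.05, class subsampling size k = 20, max ascending steps m = 20) might look like the following. The torchvision model loading, preprocessing, and the file name "example.jpg" are standard boilerplate chosen for illustration, not details taken from the paper; torchvision's Inception V3 conventionally expects 299 × 299 inputs, whereas the paper reports timings on 224 × 224 images.

```python
from PIL import Image
from torchvision import models, transforms

# Pre-trained Inception V3, as evaluated in the paper.
model = models.inception_v3(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(299),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

x = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
heatmap = agi(model, x, eps=0.05, k=20, m=20)   # settings reported in the paper
```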