Iterative Search Attribution for Deep Neural Networks

Authors: Zhiyu Zhu, Huaming Chen, Xinyi Wang, Jiayu Zhang, Zhibo Jin, Jason Xue, Jun Shen

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experimental results show that our method has superior interpretability in image recognition tasks compared with state-of-the-art baselines. Our code is available at: https://github.com/LMBTough/ISA
Researcher Affiliation | Collaboration | (1) School of Electrical and Computer Engineering, University of Sydney, Sydney, NSW, Australia; (2) Faculty of Computer Science & Information Technology, University of Malaya; (3) Suzhou Yierqi, Suzhou, China; (4) Data61, CSIRO, Sydney, NSW, Australia; (5) University of Wollongong, Australia
Pseudocode | Yes | Algorithm 1: Iterative Search Attribution (Appendix J)
Open Source Code | Yes | Our code is available at: https://github.com/LMBTough/ISA
Open Datasets | Yes | In the experiment, we employ the widely used ImageNet (Deng et al., 2009) dataset.
Dataset Splits | No | The paper mentions selecting 1000 samples for evaluation, but it does not specify explicit training/validation/test splits (e.g., percentages or counts) needed to reproduce the training of the models used in the experiments (Inception-v3, ResNet-50, VGG16); it implicitly relies on pre-trained models.
Hardware Specification | Yes | We perform the experiments on a platform with a single Nvidia RTX3090 GPU.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used (e.g., Python, PyTorch, or TensorFlow).
Experiment Setup | Yes | Specifically, we set the step size to be 5000, ascent step T1 and descent step T2 to be 8 each, learning rate to 0.002, and S to 1.1.
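The hyperparameters quoted above can be gathered into a small configuration sketch for anyone attempting a reproduction. The key names below are illustrative assumptions, not identifiers from the paper or the official repository:

```python
# Hypothetical configuration collecting the hyperparameters reported in the paper.
# Key names are illustrative; the official code at https://github.com/LMBTough/ISA
# may use different identifiers.
isa_config = {
    "step_size": 5000,       # reported step size
    "ascent_steps": 8,       # T1: ascent steps
    "descent_steps": 8,      # T2: descent steps
    "learning_rate": 0.002,  # reported learning rate
    "scale_factor": 1.1,     # S in the paper's notation
}

if __name__ == "__main__":
    for name, value in isa_config.items():
        print(f"{name}: {value}")
```

Checking a reproduction run against this dictionary makes it easy to confirm that all five reported values are set before launching experiments.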