Local Path Integration for Attribution
Authors: Peiyu Yang, Naveed Akhtar, Zeyi Wen, Ajmal Mian
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | With extensive experiments on the validation set of ImageNet 2012 (Russakovsky et al. 2015) using two visual classification models, we show that the proposed method of Local Path Integration (LPI) consistently outperforms the existing path-based attribution methods. We also contribute an evaluation metric for reliable performance estimation of the attribution methods. |
| Researcher Affiliation | Academia | Peiyu Yang (1), Naveed Akhtar (1), Zeyi Wen (2, 3), Ajmal Mian (1); (1) The University of Western Australia, (2) Hong Kong University of Science and Technology (Guangzhou), (3) Hong Kong University of Science and Technology; peiyu.yang@research.uwa.edu.au, naveed.akhtar@uwa.edu.au, wenzeyi@ust.hk, ajmal.mian@uwa.edu.au |
| Pseudocode | No | The paper includes mathematical equations describing the method but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/ypeiyu/LPI. |
| Open Datasets | Yes | We apply DiffID to evaluate the performance of attribution methods on VGG-16 (Simonyan and Zisserman 2015) and ResNet-34 (He et al. 2016) on the ImageNet 2012 validation set (Russakovsky et al. 2015). |
| Dataset Splits | Yes | With extensive experiments on the validation set of ImageNet 2012 (Russakovsky et al. 2015) using two visual classification models, we show that the proposed method of Local Path Integration (LPI) consistently outperforms the existing path-based attribution methods. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or processor types used for running experiments. |
| Software Dependencies | No | The paper mentions models like VGG-16 and ResNet-34 but does not list specific versions of ancillary software or libraries (e.g., PyTorch, TensorFlow, Python version). |
| Experiment Setup | Yes | For IG, we segment the linear path with 20 steps for the integral. In AGI, the reference is generated by the PGD attack (Madry et al. 2018) with 20 steps and a single random target class. For both LPI and EG, we employ 20 references for each input with one random step. In LPI, we empirically divide the learned distributions into 9 and 7 neighborhoods for VGG-16 and ResNet-34, respectively. |
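
The reported setup segments the IG path into 20 steps for the integral. A minimal sketch of that baseline computation (standard Integrated Gradients, not the paper's LPI method; the toy `grad_fn` and model are hypothetical stand-ins for a classifier's input gradient):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=20):
    """Approximate Integrated Gradients along the straight line from
    `baseline` to `x` with a midpoint Riemann sum of `steps` points,
    matching the 20-step segmentation reported for IG."""
    # Interpolation coefficients alpha in (0, 1), one per path segment.
    alphas = (np.arange(steps) + 0.5) / steps
    total_grad = np.zeros_like(x, dtype=float)
    for a in alphas:
        point = baseline + a * (x - baseline)
        total_grad += grad_fn(point)
    avg_grad = total_grad / steps
    # Attributions: (x - baseline) times the path-averaged gradient.
    return (x - baseline) * avg_grad

# Toy differentiable "model" f(x) = sum(x**2), with gradient 2x.
grad_fn = lambda p: 2.0 * p
x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_fn, x, baseline, steps=20)
```

By the completeness axiom of IG, the attributions should sum to f(x) - f(baseline); for this quadratic toy model the 20-step midpoint rule recovers it exactly.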