GANMEX: One-vs-One Attributions using GAN-based Model Explainability
Authors: Sheng-Min Shih, Pin-Ju Tien, Zohar Karnin
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We showed that GANMEX baselines improved the saliency maps and led to stronger performance on multiple evaluation metrics over the existing baselines. We tested the one-vs-one attribution on three multi-class datasets: MNIST, SVHN, and CIFAR10. To evaluate the saliency methods for one-vs-one attribution, we leverage the Benchmarking Attribution Methods (BAM) dataset (Yang & Kim, 2019). |
| Researcher Affiliation | Industry | 1Amazon. Correspondence to: Sheng-Min Shih <shengminshih@gmail.com>, Pin-Ju Tien <pinju.tien@gmail.com>, Zohar Karnin <zkarnin@gmail.com>. |
| Pseudocode | No | The paper describes algorithmic steps but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not state that source code is released, nor does it link to a code repository for the described methodology. |
| Open Datasets | Yes | In what follows we experiment with the datasets MNIST (Le Cun & Cortes, 2010), Street-View House Numbers (SVHN) (Netzer et al., 2011), CIFAR10 (Krizhevsky, 2009), apple2orange (Zhu et al., 2017), and BAM (Yang & Kim, 2019). |
| Dataset Splits | No | The paper mentions using training sets (e.g., 'training set XT') and discusses training, but does not specify the exact percentages or sample counts for the training, validation, and test splits needed for reproduction. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiments. |
| Experiment Setup | Yes | We provide more implementation details including hyper-parameters in Appendix A.2. |