Enhanced Regularizers for Attributional Robustness
Authors: Anindya Sarkar, Anirban Sarkar, Vineeth N Balasubramanian
AAAI 2021, pp. 2532-2540
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted a comprehensive suite of experiments and ablation studies, which we report in this section and in Sec. We report results with our method on 4 benchmark datasets, i.e., Flower (Nilsback and Zisserman 2006), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), MNIST and GTSRB (Stallkamp et al. 2012). |
| Researcher Affiliation | Collaboration | Anindya Sarkar, Anirban Sarkar, Vineeth N Balasubramanian; Indian Institute of Technology, Hyderabad; anindya.sarkar@cse.iith.ac.in, cs16resch11006@iith.ac.in, vineethnb@iith.ac.in |
| Pseudocode | Yes | An algorithm for our overall methodology is also presented in the Appendix due to space constraints. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | We report results with our method on 4 benchmark datasets i.e. Flower (Nilsback and Zisserman 2006), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), MNIST and GTSRB (Stallkamp et al. 2012). |
| Dataset Splits | No | The paper mentions training data and test sets but does not explicitly provide details on validation splits (e.g., percentages, sample counts, or clear references to predefined validation sets). |
| Hardware Specification | No | The paper acknowledges the provision of 'GPU servers' but does not provide specific details such as exact GPU models, CPU models, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper describes the network architectures used but does not provide specific software dependency details with version numbers (e.g., Python, PyTorch/TensorFlow versions, or other libraries). |
| Experiment Setup | Yes | We used a regularizer coefficient λ = 1.0 and m = 50 as the number of steps used for computing IG (Eqn 1) across all experiments. Note that our adversarial and attributional attack configurations were kept fixed across ours and baseline methods. Please refer to the Appendix for more details on training hyperparameters and attack configurations for specific datasets. |
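The m = 50 steps quoted above refer to the Riemann-sum approximation commonly used to compute Integrated Gradients (IG). A minimal sketch of that approximation is shown below; the `grad_fn` and the toy quadratic model are illustrative placeholders, not the paper's network, and the straight-line path from a zero baseline is the standard IG formulation rather than anything specific to this paper.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline=None, m=50):
    """Approximate Integrated Gradients with an m-step right Riemann sum.

    grad_fn : callable returning the model's gradient at a given input.
    x       : input to attribute.
    baseline: reference input (defaults to all zeros, as is common).
    """
    if baseline is None:
        baseline = np.zeros_like(x)
    # Interpolation coefficients alpha = k/m for k = 1..m along the
    # straight-line path from baseline to x.
    alphas = np.arange(1, m + 1) / m
    grads = np.stack(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas]
    )
    # IG_i = (x_i - baseline_i) * average gradient along the path.
    return (x - baseline) * grads.mean(axis=0)

# Toy model for illustration: f(x) = sum(x^2), whose gradient is 2x.
grad_fn = lambda v: 2.0 * v
x = np.array([1.0, -2.0, 3.0])
ig = integrated_gradients(grad_fn, x, m=50)
# Completeness check: attributions should sum to f(x) - f(baseline) = 14,
# up to discretization error from the finite number of steps.
print(ig.sum())
```

With m = 50 the discretization error of the Riemann sum is already small; larger m trades compute for a tighter completeness gap.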