On the Robustness of Removal-Based Feature Attributions
Authors: Chris Lin, Ian Covert, Su-In Lee
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results on synthetic and real-world data validate our theoretical results and demonstrate their practical implications, including the ability to increase attribution robustness by improving the model's Lipschitz regularity. |
| Researcher Affiliation | Academia | Chris Lin University of Washington clin25@cs.washington.edu Ian Covert Stanford University icovert@stanford.edu Su-In Lee University of Washington suinlee@cs.washington.edu |
| Pseudocode | No | The paper describes algorithms and methods but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/suinleelab/removal-robustness. |
| Open Datasets | Yes | Next, we demonstrate two practical implications of our findings with the UCI wine quality [14, 24], MNIST [44], CIFAR-10 [42] and Imagenette datasets [19, 36]. |
| Dataset Splits | Yes | The UCI white wine quality dataset [14, 24] consists of 4,898 samples of white wine... Two subsets of 500 samples are randomly chosen as the validation and test set. From the official training set of MNIST and CIFAR-10, 10,000 images are randomly chosen as the validation set. |
| Hardware Specification | Yes | All models were trained with NVIDIA GeForce RTX 2080 Ti GPUs with 11GB memory. The ResNet-50 networks were trained with NVIDIA Quadro RTX 6000 GPUs with 24GB memory. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies. It mentions training with 'Adam' and implicitly uses deep learning frameworks (likely PyTorch given the models) but without version details. |
| Experiment Setup | Yes | The architecture consists of 3 hidden layers of width 128 with ReLU activations and was trained with Adam for 50 epochs with learning rate 0.001. ... We trained the model with Adam for 200 epochs with learning rate 0.001. ... We trained ResNet-18 networks for CIFAR-10 from scratch with Adam for 500 epochs with learning rate 0.00005. ... fine-tuned it for Imagenette with Adam for 20 epochs with learning rate 0.0005. |
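
The tabular setup quoted above (3 hidden layers of width 128 with ReLU activations, Adam, 50 epochs, learning rate 0.001) can be summarized in a minimal sketch. This assumes PyTorch, since the report notes the framework is not stated explicitly; the input/output sizes for the UCI white wine quality data and the regression loss are also assumptions, not details from the paper.

```python
# Hypothetical sketch of the quoted tabular-model setup (assumes PyTorch).
import torch
import torch.nn as nn


def build_mlp(in_features: int = 11, out_features: int = 1) -> nn.Sequential:
    """3 hidden layers of width 128 with ReLU activations, as quoted in the setup."""
    return nn.Sequential(
        nn.Linear(in_features, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, out_features),
    )


def train(model: nn.Module, loader, epochs: int = 50, lr: float = 1e-3) -> nn.Module:
    """Adam for 50 epochs with learning rate 0.001, per the quoted hyperparameters."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # regression on wine quality scores is an assumption
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```

The image-model settings quoted in the same row (ResNet-18 trained from scratch on CIFAR-10, a fine-tuned model for Imagenette) would follow the same pattern with different architectures, epoch counts, and learning rates.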