A Functional Information Perspective on Model Interpretation
Authors: Itai Gat, Nitay Calderon, Roi Reichart, Tamir Hazan
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we show that our method surpasses existing interpretability sampling-based methods on various data signals such as image, text, and audio. |
| Researcher Affiliation | Academia | Technion – Israel Institute of Technology. |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | Yes | Our code is available at https://github.com/nitaytech/FunctionalExplanation. |
| Open Datasets | Yes | We use the Google speech commands dataset (Warden, 2018). We use the CIFAR10 to evaluate our method quantitatively (Krizhevsky et al., 2009). We evaluate our method on the IMDB dataset (Maas et al., 2011). |
| Dataset Splits | Yes | Audio: This data was split into train (80%), validation (10%), and test (10%) sets. Vision: The dataset is constructed of 50,000 images in the train set and 10,000 images in the validation set. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, or cloud instance specifications) used for running the experiments were provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1') were explicitly listed in the paper. |
| Experiment Setup | No | No specific experimental setup details, such as hyperparameter values (learning rate, batch size, number of epochs, optimizer settings), were explicitly provided in the main text of the paper. |