Self-Interpretable Model with Transformation Equivariant Interpretation
Authors: Yipei Wang, Xiaoqian Wang
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: "In this section, we conduct experiments on image classification tasks with and without transformations. The experiment results demonstrate the high-quality interpretations and the validity of SITE." |
| Researcher Affiliation | Academia | Yipei Wang, Xiaoqian Wang, Elmore School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907; wang4865@purdue.edu, joywang@purdue.edu |
| Pseudocode | No | The paper provides mathematical formulations and descriptive text for its model, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states only: "We will provide the code and instructions at request." No code repository or download link is provided. |
| Open Datasets | Yes | The paper states: "First, we implement SITE on MNIST dataset." and "For CIFAR-10, input x is fed to the feature extractor F1." In the reviewer checklist, the authors also confirm that the data used in the experiments are publicly available. (A hedged dataset-loading sketch follows this table.) |
| Dataset Splits | No | The paper states: "The test is performed on the untransformed validation set." and, in the reviewer checklist, "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See the Appendix." The specific split details are deferred to the Appendix, which is not included in the provided text. |
| Hardware Specification | No | The paper's checklist states: "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See the beginning of Section 4." However, the beginning of Section 4 and the provided text do not specify any hardware details. |
| Software Dependencies | No | The paper states: "The applications of various post-hoc methods are implemented through the TorchRay toolkit." This mentions a toolkit but does not provide version numbers for it or for any other software dependencies. |
| Experiment Setup | No | The paper's checklist states: "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See the Appendix." However, these specific details are not present in the provided main text. |
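
The experiments rely on publicly available datasets (MNIST and CIFAR-10). As a minimal sketch of how a reproduction attempt could obtain them, assuming a standard PyTorch/torchvision environment (the paper does not specify its software stack, and the transform below is a placeholder, not the authors' actual preprocessing):

```python
# Minimal sketch: fetching the public datasets referenced by the paper
# (MNIST, CIFAR-10) via torchvision. Assumes torchvision is installed;
# the ToTensor transform is a placeholder, not the authors' preprocessing.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # placeholder preprocessing

mnist_train = torchvision.datasets.MNIST(
    root="./data", train=True, download=True, transform=transform)
mnist_test = torchvision.datasets.MNIST(
    root="./data", train=False, download=True, transform=transform)

cifar_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
cifar_test = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)

# Sanity check on dataset sizes (60k/10k for MNIST, 50k/10k for CIFAR-10)
print(len(mnist_train), len(mnist_test), len(cifar_train), len(cifar_test))
```

The actual train/validation splits, transformations, and hyperparameters used by SITE are described in the paper's Appendix, which is not part of the provided text.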