A Psychological Theory of Explainability
Authors: Scott Cheng-Hsin Yang, Nils Erik Tomas Folke, Patrick Shafto
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI. |
| Researcher Affiliation | Academia | Department of Mathematics and Computer Science, Rutgers University-Newark, Newark, New Jersey, USA; School of Mathematics, Institute for Advanced Study, New Jersey, USA. |
| Pseudocode | No | The paper describes mathematical formulations and processes but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | All experiments, mathematical models, analysis code, and hypothesis tests were preregistered (https://osf.io/4n67p). |
| Open Datasets | Yes | ImageNet (Russakovsky et al., 2015), misclassified images drawn from ImageNet, and misclassified images drawn from the Natural Adversarial ImageNet dataset (Hendrycks et al., 2021). |
| Dataset Splits | Yes | To compare the predictive performance of the full model to the alternatives, we used leave-one-out cross-validation (LOO-CV) to control for model complexity (see the LOO-CV sketch below the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions the use of a ResNet-50 model but does not provide any specific software versions for libraries, frameworks, or other dependencies used in the experiments (a hedged loading sketch follows the table). |
| Experiment Setup | No | The paper describes the mathematical formulations of its models and the method used to fit a parameter (λ), but it does not provide specific hyperparameter values or detailed system-level training settings (see the λ-fitting sketch below the table). |
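
The paper states that LOO-CV was used to compare the predictive performance of the full model against simpler alternatives while controlling for model complexity. The sketch below illustrates that comparison pattern only; the features, the logistic-regression stand-in models, and the simulated data are hypothetical, not the authors' models or data.

```python
# Hedged sketch of leave-one-out model comparison: every model is refit
# with one trial held out, and models are ranked by mean held-out
# log-likelihood, which penalizes needless complexity automatically.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))                       # per-trial features (stand-in)
y = ((X[:, 0] + 0.5 * rng.normal(size=40)) > 0) * 1  # participant responses (stand-in)

def loo_log_likelihood(feature_cols):
    """Mean held-out log-likelihood for a model using the given features."""
    scores = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = LogisticRegression().fit(X[train_idx][:, feature_cols], y[train_idx])
        # Probability assigned to the held-out trial's true response.
        p = clf.predict_proba(X[test_idx][:, feature_cols])[0, y[test_idx][0]]
        scores.append(np.log(p))
    return np.mean(scores)

# Higher held-out log-likelihood wins once complexity is controlled for.
print("full model:   ", loo_log_likelihood([0, 1]))
print("reduced model:", loo_log_likelihood([0]))
```

Because each fold holds out exactly one trial, the richer model only wins if its extra feature genuinely improves out-of-sample prediction, which is the complexity control the row describes.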
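For the classifier, the paper names a ResNet-50 but no library versions, so the row above is marked No. The sketch below shows one plausible way to load a pretrained ResNet-50 for inference; the torchvision API used (available from torchvision 0.13 onward) is an assumption for illustration, not a detail from the paper.

```python
# Hedged sketch: loading a pretrained ResNet-50 via torchvision.
# The weights enum and version requirement are assumptions, not the
# authors' documented setup.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V1   # pretrained ImageNet weights
model = resnet50(weights=weights).eval()   # inference mode, no fine-tuning
preprocess = weights.transforms()          # matching input normalization

# Classify one image tensor (3 x H x W, values in [0, 1]).
img = torch.rand(3, 224, 224)              # stand-in for an ImageNet image
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
print(weights.meta["categories"][logits.argmax().item()])
```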
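Finally, the paper reports fitting a single free parameter λ but not the optimizer or its settings. The sketch below fits λ by maximum likelihood on simulated data; the one-parameter logistic model form, the search bounds, and the data are hypothetical stand-ins, not the paper's model.

```python
# Hedged sketch of fitting a scalar parameter lambda by maximizing the
# likelihood of simulated participant responses.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.normal(size=100)                       # explanation-based scores (stand-in)
p_true = 1 / (1 + np.exp(-1.5 * x))            # ground truth generated with lambda = 1.5
y = (rng.random(100) < p_true).astype(float)   # simulated participant choices

def nll(lam):
    """Negative log-likelihood of the model p = sigmoid(lam * x)."""
    p = np.clip(1 / (1 + np.exp(-lam * x)), 1e-9, 1 - 1e-9)  # avoid log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize_scalar(nll, bounds=(0.01, 10.0), method="bounded")
print(f"fitted lambda: {fit.x:.2f}")  # should recover roughly 1.5
```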