A Learning Theoretic Perspective on Local Explainability
Authors: Jeffrey Li, Vaishnavh Nagarajan, Gregory Plumb, Ameet Talwalkar
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we validate our theoretical results empirically and show that they reflect what can be seen in practice. We verify empirically on UCI Regression datasets that our results non-trivially reflect the two types of generalization in practice. |
| Researcher Affiliation | Collaboration | Jeffrey Li (University of Washington, jwl2162@cs.washington.edu); Vaishnavh Nagarajan and Gregory Plumb (Carnegie Mellon University, vaishnavh@cs.cmu.edu); Ameet Talwalkar (Carnegie Mellon University & Determined AI) |
| Pseudocode | No | The paper describes algorithmic procedures but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | For both experiments, we use several regression datasets from the UCI collection (Dua & Graff, 2017) |
| Dataset Splits | Yes | Specifically, we split the original test data into two halves, using only the first half for explanation training and the second for explanation testing. (A sketch of this split appears after the table.) |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions using neural networks and linear models but does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper states that neural networks were trained 'with the same setup as in (Plumb et al., 2020)' and mentions using 'linear models' and 'empirical MNF minimizer', but does not provide specific hyperparameter values or detailed training configurations within the main text. |
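
The split quoted in the Dataset Splits row is simple to reproduce. Below is a minimal sketch, assuming a generic UCI-style regression test set held as NumPy arrays; the black-box model `f`, the synthetic data, and all variable names are illustrative placeholders, not identifiers from the paper, and the single global least-squares surrogate stands in for the paper's local linear explanations only to keep the example short.

```python
import numpy as np

# Sketch of the evaluation split described above: the original test set
# is halved, with the first half used to fit explanations ("explanation
# training") and the second half used to evaluate them ("explanation
# testing"). All names here are hypothetical placeholders.

rng = np.random.default_rng(0)
X_test = rng.normal(size=(200, 5))  # stand-in for a UCI regression test set
f = lambda X: X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + np.sin(X[:, 0])

half = len(X_test) // 2
X_expl_train, X_expl_test = X_test[:half], X_test[half:]

# Explanation training: fit a linear surrogate to the black box's
# predictions on the first half (intercept added via a constant column).
A = np.hstack([X_expl_train, np.ones((half, 1))])
coef, *_ = np.linalg.lstsq(A, f(X_expl_train), rcond=None)

# Explanation testing: measure the surrogate's fidelity to the black box
# on the held-out second half.
A_test = np.hstack([X_expl_test, np.ones((len(X_expl_test), 1))])
mse = np.mean((A_test @ coef - f(X_expl_test)) ** 2)
print(f"held-out fidelity (MSE): {mse:.4f}")
```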