Towards Robust Interpretability with Self-Explaining Neural Networks
Authors: David Alvarez-Melis, Tommi S. Jaakkola
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results across various benchmark datasets show that our framework offers a promising direction for reconciling model complexity and interpretability. We carry out quantitative evaluation on three classification settings: (i) MNIST digit recognition, (ii) benchmark UCI datasets [13] and (iii) ProPublica's COMPAS Recidivism Risk Score datasets. |
| Researcher Affiliation | Academia | David Alvarez-Melis, CSAIL, MIT (dalvmel@mit.edu); Tommi S. Jaakkola, CSAIL, MIT (tommi@csail.mit.edu) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a link to open-source code for the described methodology or explicitly state its release. The only GitHub link mentioned is for the COMPAS dataset, not the authors' code. |
| Open Datasets | Yes | We carry out quantitative evaluation on three classification settings: (i) MNIST digit recognition, (ii) benchmark UCI datasets [13] and (iii) ProPublica's COMPAS Recidivism Risk Score datasets (github.com/propublica/compas-analysis/); see the loading sketch after this table. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or citations to predefined splits for their experiments). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or cloud computing specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or solver names with version numbers). |
| Experiment Setup | No | The paper discusses aspects of the model design, such as the regularization parameter λ, but it does not provide specific experimental setup details such as learning rates, batch sizes, optimizers, or number of training epochs (see the training sketch after this table). |
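
Since the paper points to the public COMPAS repository but reports no splits or preprocessing, a reproduction would start from the raw data. Below is a minimal loading sketch; the file name `compas-scores-two-years.csv` and the feature selection are assumptions taken from the public repository, not details given in the paper.

```python
# Hedged sketch: load ProPublica's COMPAS data from the repository cited
# in the paper. File name and column choices are assumptions based on the
# public repo (github.com/propublica/compas-analysis/), not the paper.
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(URL)

# Illustrative feature subset; the paper does not specify its preprocessing.
features = ["age", "priors_count", "juv_fel_count", "c_charge_degree"]
X = pd.get_dummies(df[features])   # one-hot encode the categorical charge degree
y = df["two_year_recid"]           # binary recidivism label

print(X.shape, float(y.mean()))
```

MNIST and the UCI benchmarks named alongside COMPAS are available through standard loaders (e.g., torchvision and scikit-learn), though again the paper does not state which versions or splits it used.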
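The λ noted in the last row weights the paper's robustness regularizer, which ties the relevance scores θ(x) to the local behavior of the self-explaining model f(x) = θ(x)ᵀh(x). The sketch below shows where λ would enter such an objective; the network sizes, optimizer, learning rate, and the summed-logit simplification are all placeholder assumptions, since the paper reports none of these.

```python
# Hedged sketch of a SENN-style objective f(x) = theta(x)^T h(x) with the
# robustness penalty weighted by lambda. All hyperparameters here are
# illustrative guesses; the paper does not report its training setup.
import torch
import torch.nn as nn

class SENN(nn.Module):
    def __init__(self, d_in, k_concepts, n_classes):
        super().__init__()
        # h(x): interpretable concept encoder
        self.h = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                               nn.Linear(64, k_concepts))
        # theta(x): per-class relevance scores for each concept
        self.theta = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                   nn.Linear(64, k_concepts * n_classes))
        self.k, self.c = k_concepts, n_classes

    def forward(self, x):
        h = self.h(x)                                    # (B, k)
        theta = self.theta(x).view(-1, self.c, self.k)   # (B, C, k)
        logits = torch.einsum("bck,bk->bc", theta, h)    # f(x) = theta^T h
        return logits, h, theta

model = SENN(d_in=10, k_concepts=5, n_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)      # assumed optimizer/lr
lam = 1e-2                                               # the lambda in question

x = torch.randn(32, 10, requires_grad=True)              # dummy batch
y = torch.randint(0, 2, (32,))

logits, h, theta = model(x)
task_loss = nn.functional.cross_entropy(logits, y)

# Robustness term ||grad_x f(x) - theta(x)^T J_h(x)||: theta(x) should act
# locally as the coefficients of f with respect to h(x). Summing logits over
# classes is a simplification of the per-class penalty in the paper.
grad_f = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
surrogate = torch.einsum("bck,bk->bc", theta.detach(), h).sum()
grad_th = torch.autograd.grad(surrogate, x, create_graph=True)[0]
robustness = (grad_f - grad_th).norm(dim=1).mean()

loss = task_loss + lam * robustness
opt.zero_grad()
loss.backward()
opt.step()
```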