Learning Global Transparent Models consistent with Local Contrastive Explanations

Authors: Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We now empirically validate our method. We first describe the setup, followed by a discussion of the experimental results." |
| Researcher Affiliation | Industry | Tejaswini Pedapati, IBM Research (tejaswinip@us.ibm.com); Avinash Balakrishnan, IBM Research (avinash.bala@us.ibm.com); Karthikeyan Shanmugam, IBM Research (karthikeyan.shanmugam2@ibm.com); Amit Dhurandhar, IBM Research (adhuran@us.ibm.com) |
| Pseudocode | Yes | Algorithm 1: Global Boolean Feature Learning (GBFL); Algorithm 2: KDE-based Grid Point Generation (GPG); Algorithm 3: Model Generation using Local Explanations |
| Open Source Code | No | The paper provides no concrete access to source code for the methodology, nor does it state that code is available or will be released. |
| Open Datasets | Yes | "We experimented on six publicly available datasets from Kaggle and UCI repository, namely: Sky Survey, Credit Card, Magic, Diabetes, Waveform and WDBC." |
| Dataset Splits | Yes | Statistically significant results based on a paired t-test are reported, computed over 5 randomizations with a 75/25% train/test split. 10-fold cross-validation was used to find all parameters, including tree heights (≤ 5) for DT. |
| Hardware Specification | No | The paper does not specify the hardware used to run the experiments. |
| Software Dependencies | No | The paper names software components such as decision trees (DTs) and the CART algorithm but does not give version numbers for any software dependencies. |
| Experiment Setup | Yes | "Decision trees (DTs) with height 5 were the transparent learner based on the CART algorithm. ... 10-fold cross-validation was used to find all parameters including tree heights (≤ 5) for DT." |
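
The Pseudocode row above names three procedures, but the report includes no runnable code. Below is a minimal, hypothetical sketch of the KDE-based grid point generation step (Algorithm 2, GPG), assuming only that candidate grid points are placed per feature where a 1-D kernel density estimate is high; the function name and the `points_per_feature` knob are our own illustration, not the paper's algorithm.

```python
# Hypothetical sketch of KDE-based grid point generation (Algorithm 2, GPG).
# The paper's exact procedure is not reproduced here; this only illustrates
# placing per-feature grid points in high-density regions of a 1-D KDE.
import numpy as np
from scipy.stats import gaussian_kde

def kde_grid_points(X, points_per_feature=10):
    """For each column of X, return grid points located in high-density
    regions of a 1-D Gaussian KDE. `points_per_feature` is an assumed knob."""
    grids = []
    for j in range(X.shape[1]):
        col = X[:, j]
        kde = gaussian_kde(col)
        # Evaluate the density on a fine uniform grid over the feature's range.
        candidates = np.linspace(col.min(), col.max(), 200)
        density = kde(candidates)
        # Keep the highest-density candidates as this feature's grid points.
        top = candidates[np.argsort(density)[-points_per_feature:]]
        grids.append(np.sort(top))
    return grids
```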
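Two of the six datasets are easy to obtain programmatically: WDBC ships with scikit-learn as the breast cancer dataset, and the Pima Indians Diabetes data is mirrored on OpenML. The specific loader calls below are our assumption about convenient sources, not references taken from the paper.

```python
# Loading two of the six public datasets named in the Open Datasets row.
# WDBC is bundled with scikit-learn; the OpenML "diabetes" name/version is
# our assumption about a convenient mirror, not cited by the paper.
from sklearn.datasets import load_breast_cancer, fetch_openml

X_wdbc, y_wdbc = load_breast_cancer(return_X_y=True)  # WDBC

diabetes = fetch_openml("diabetes", version=1, as_frame=True)  # Pima diabetes
X_dia, y_dia = diabetes.data, diabetes.target
```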
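The Dataset Splits and Experiment Setup rows together describe a concrete protocol: five randomized 75/25 train/test splits, a CART decision tree whose height (≤ 5) is tuned by 10-fold cross-validation, and a paired t-test over the per-split scores. A hedged sketch of that harness, using scikit-learn's CART-style `DecisionTreeClassifier` (the seeds and the baseline being compared against are placeholders, not the paper's):

```python
# Reconstruction of the reported evaluation protocol: 5 random 75/25 splits,
# a CART decision tree with max depth <= 5 tuned by 10-fold CV, and a paired
# t-test across splits. Seeds and the comparison baseline are assumptions.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier

def evaluate(X, y, seeds=(0, 1, 2, 3, 4)):
    scores = []
    for seed in seeds:
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.25, random_state=seed)
        # scikit-learn's DecisionTreeClassifier implements an optimized CART.
        search = GridSearchCV(
            DecisionTreeClassifier(random_state=seed),
            param_grid={"max_depth": [1, 2, 3, 4, 5]},
            cv=10)
        search.fit(X_tr, y_tr)
        scores.append(search.score(X_te, y_te))
    return np.array(scores)

if __name__ == "__main__":
    from sklearn.datasets import load_breast_cancer
    X, y = load_breast_cancer(return_X_y=True)  # WDBC as a stand-in example
    scores_dt = evaluate(X, y)
    print("mean accuracy over 5 splits:", scores_dt.mean())
    # Significance against another method's per-split scores would then be
    # assessed with a paired t-test, e.g.:
    # t, p = ttest_rel(scores_dt, scores_other)
```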