Gaining Free or Low-Cost Interpretability with Interpretable Partial Substitute

Authors: Tong Wang

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 6. Experiments: We perform a detailed experimental evaluation of HyRS using public datasets and a real-world application.
Researcher Affiliation | Academia | Department of Business Analytics, University of Iowa, Iowa, USA. Correspondence to: Tong Wang <tong-wang@uiowa.edu>.
Pseudocode | Yes | Algorithm 1: Stochastic Local Search algorithm (a generic skeleton is sketched after the table).
Open Source Code | Yes | Code for HyRS is available at https://github.com/wangtongada/HyRS
Open Datasets | Yes | We use four structured datasets and a text dataset from domains where interpretability is highly desired, including healthcare, judiciaries and customer analysis. 1) juvenile (Osofsky, 1995)... 2) credit card... (Yeh & Lien, 2009) 3) recidivism... 4) readmission... 5) Yelp review (Kotzias et al., 2015)... and Lichman, M. UCI Machine Learning Repository, 2013. URL http://archive.ics.uci.edu/ml.
Dataset Splits | Yes | We partition each dataset into 80% training and 20% testing. We do cross-validation for parameter tuning on the training set and evaluate the best model on the test set. (See the evaluation-protocol sketch after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models, memory, or specific cloud instances.
Software Dependencies | No | The paper mentions software components like Random Forests, AdaBoost, XGBoost, and LSTM, but does not provide specific version numbers for these libraries or the underlying programming environment.
Experiment Setup | Yes | We train the network for 200 epochs. θ1 controls the number of rules and is chosen from [0.001, 0.01]; θ2 controls transparency and is chosen from 0 to 1. (See the evaluation-protocol sketch after the table.)
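
The paper's central object is a hybrid of an interpretable rule set and a black-box model, where θ2 trades off transparency, i.e. the fraction of instances handled by the interpretable part. The snippet below is a minimal conceptual sketch of that routing and of how transparency could be measured; the function and variable names are illustrative assumptions, not taken from the HyRS codebase.

```python
import numpy as np

def hybrid_predict(X, covered, interpretable_model, blackbox_model):
    """Route instances covered by the rule set to the interpretable model,
    and everything else to the black box.

    covered: boolean array, True where some rule in the rule set fires.
    Returns hybrid predictions and the achieved transparency
    (fraction of instances explained by the interpretable part).
    """
    preds = np.where(covered,
                     interpretable_model.predict(X),
                     blackbox_model.predict(X))
    transparency = covered.mean()
    return preds, transparency
```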
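
The Pseudocode row notes that training is stated as "Algorithm 1: Stochastic Local Search". The paper defines the exact proposal and acceptance steps; the skeleton below only illustrates the generic pattern (propose a neighboring solution, accept or reject with a temperature-controlled probability). The `objective` and `propose_neighbor` hooks are placeholders, not the paper's definitions.

```python
import math
import random

def stochastic_local_search(initial_solution, objective, propose_neighbor,
                            n_iter=5000, t0=1.0):
    """Generic simulated-annealing-style local search skeleton.

    objective: function to minimize (e.g., loss plus rule-count and
               transparency penalties weighted by theta1 and theta2).
    propose_neighbor: returns a slightly perturbed candidate solution
                      (e.g., add, remove, or swap one rule).
    """
    current, best = initial_solution, initial_solution
    f_current = f_best = objective(initial_solution)
    for t in range(1, n_iter + 1):
        temperature = t0 / math.log(t + 1)  # illustrative cooling schedule
        candidate = propose_neighbor(current)
        f_candidate = objective(candidate)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature decreases.
        if (f_candidate < f_current or
                random.random() < math.exp(-(f_candidate - f_current) / temperature)):
            current, f_current = candidate, f_candidate
            if f_current < f_best:
                best, f_best = current, f_current
    return best
```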
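
The Dataset Splits and Experiment Setup rows describe an 80/20 train/test partition with cross-validation on the training set over θ1 in [0.001, 0.01] and θ2 in [0, 1]. The sketch below reproduces that protocol with scikit-learn on NumPy arrays; since the HyRS estimator's API is not shown here, a hypothetical `fit_and_score` hook stands in for training and scoring the model from the author's repository.

```python
import itertools
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def tune_and_evaluate(X, y, fit_and_score, theta1_grid=(0.001, 0.01),
                      theta2_grid=np.linspace(0.0, 1.0, 5), n_folds=5, seed=0):
    """80/20 split, then cross-validated grid search over (theta1, theta2).

    fit_and_score(X_tr, y_tr, X_va, y_va, theta1, theta2) -> validation score
    is a hypothetical hook; replace it with the actual HyRS training call.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed)

    best_params, best_cv = None, -np.inf
    for theta1, theta2 in itertools.product(theta1_grid, theta2_grid):
        folds = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
        scores = [fit_and_score(X_train[tr], y_train[tr],
                                X_train[va], y_train[va], theta1, theta2)
                  for tr, va in folds.split(X_train)]
        if np.mean(scores) > best_cv:
            best_cv, best_params = np.mean(scores), (theta1, theta2)

    # Retrain with the best (theta1, theta2) on the full training set and
    # report the final score on the held-out 20% test split.
    test_score = fit_and_score(X_train, y_train, X_test, y_test, *best_params)
    return best_params, test_score
```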