Model Agnostic Multilevel Explanations

Authors: Karthikeyan Natesan Ramamurthy, Bhanukiran Vinzamuri, Yunfeng Zhang, Amit Dhurandhar

NeurIPS 2020

Reproducibility assessment. For each reproducibility variable below, the assessed Result is given first, followed by the supporting LLM Response.

Research Type: Experimental
LLM Response: We show that we produce high fidelity sparse explanations on several public datasets and also validate the effectiveness of the proposed technique based on two human studies, one with experts and the other with non-expert users, on real world datasets.

Researcher Affiliation: Industry
LLM Response: IBM Research, Yorktown Heights, NY, USA 10598; knatesa@us.ibm.com, bhanu.vinzamuri@ibm.com, {zhangyun, adhuran}@us.ibm.com

Pseudocode: Yes
LLM Response: Algorithm 1: Model Agnostic Multilevel Explanation (MAME) method

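For context, below is a minimal sketch of the kind of multilevel objective that Algorithm 1 optimizes: per-instance linear explanations theta_i with a weighted local-fidelity term, an L1 sparsity term, and a fusion penalty whose weight beta merges nearby explanations into coarser levels. All names here are illustrative assumptions, not the authors' reference implementation.

    import numpy as np

    def mame_objective(theta, neighborhoods, f_vals, psi, alpha, beta):
        """Value of a MAME-style objective for a fixed beta.

        theta: (n, d) array, one local linear explanation per instance.
        neighborhoods[i]: (m, d) perturbed samples around instance i.
        f_vals[i]: (m,) black-box predictions on those samples.
        psi[i]: (m,) neighborhood weights; alpha: (n,) per-instance L1 weights.
        """
        n = theta.shape[0]
        total = 0.0
        for i in range(n):
            # Local fidelity: weighted squared error of the linear surrogate.
            residual = f_vals[i] - neighborhoods[i] @ theta[i]
            total += np.sum(psi[i] * residual ** 2)
            # Sparsity: alpha_i is tuned so the beta = 0 explanation is sparse.
            total += alpha[i] * np.sum(np.abs(theta[i]))
        # Fusion: as beta grows along the regularization path, explanations of
        # similar instances are pulled together and eventually merge, yielding
        # the coarser levels of the multilevel tree. A complete instance graph
        # is assumed here for simplicity.
        for i in range(n):
            for j in range(i + 1, n):
                total += beta * np.linalg.norm(theta[i] - theta[j])
        return total
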
Open Source Code: No
LLM Response: The paper provides a link in a footnote: 'We generated the data using code in https://github.com/IBM/AIX360/blob/master/aix360/data/ted_data/GenerateData.py'. This link points to a script for generating data, not to the source code for the proposed MAME methodology itself. The paper does not contain an explicit statement or link for the MAME code.

Open Datasets: Yes
LLM Response: Our demonstration includes the Auto MPG [29], Retention [30], Home Equity Line of Credit (HELOC) [6], Waveform [29], and Airline Travel Information System (ATIS) datasets.

Dataset Splits: Yes
LLM Response: We perform 5-fold cross-validation for all datasets except ATIS (which comes with its own train-test partition) and report mean performances.

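As an illustration of this protocol, the sketch below runs 5-fold cross-validation and reports the mean score. KFold, the shuffling seed, and the metric interface are assumptions; the paper does not state the exact library calls it uses.

    import numpy as np
    from sklearn.model_selection import KFold

    def mean_cv_score(model, X, y, metric, n_splits=5, seed=0):
        """Fit on 4 folds, score on the held-out fold, average over folds."""
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        scores = []
        for train_idx, test_idx in kf.split(X):
            model.fit(X[train_idx], y[train_idx])
            scores.append(metric(y[test_idx], model.predict(X[test_idx])))
        return float(np.mean(scores))
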
Hardware Specification: No
LLM Response: The paper mentions that experiments 'complete running in about 10 minutes or less, in a single core' and provides runtimes for the larger datasets. While 'single core' implies a CPU-only environment, the paper does not give specifics such as the CPU model (e.g., Intel Core i7 or Xeon), GPU model, or memory configuration.

Software Dependencies: No
LLM Response: The paper mentions software such as the scikit-learn MLPClassifier (Section 4.4) and algorithms such as the LASSO homotopy method [31], but it does not specify version numbers for these libraries, which would be necessary for exact reproducibility.

Experiment Setup: Yes
LLM Response: When running LIME and MAME, the neighborhood size |N_i| in (1) is set to 10, the neighborhood weights ψ(x_i, z) are set using a Gaussian kernel on ||x_i − z||_2^2 with an automatically tuned bandwidth, and the α_i values in (1) are set to provide explanations with 5 non-zero values when β = 0.
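
The neighborhood weighting just described can be sketched as follows: a Gaussian kernel applied to the squared Euclidean distance between the anchor instance and each perturbed sample. The median-distance bandwidth heuristic used here is an assumption standing in for the paper's 'automatically tuned bandwidth'.

    import numpy as np

    def neighborhood_weights(x_i, Z):
        """x_i: (d,) anchor instance; Z: (m, d) its perturbed neighborhood."""
        sq_dists = np.sum((Z - x_i) ** 2, axis=1)
        # Median heuristic: an assumed stand-in for the paper's automatically
        # tuned bandwidth.
        sigma2 = np.median(sq_dists) + 1e-12
        return np.exp(-sq_dists / (2.0 * sigma2))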