Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME
Authors: Farhad Shakerin, Gopal Gupta (pp. 3052-3059)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments with UCI standard benchmarks suggest a significant improvement in terms of classification evaluation metrics. |
| Researcher Affiliation | Academia | Farhad Shakerin, Gopal Gupta, Computer Science Department, The University of Texas at Dallas, Richardson, USA {farhad.shakerin,gupta}@utdallas.edu |
| Pseudocode | Yes | Algorithm 1 Summarizing the FOIL algorithm; Algorithm 2 Linear Model Generation by LIME; Algorithm 3 FOLD Algorithm; Algorithm 4 Dataset Transformation with LIME |
| Open Source Code | No | The paper mentions that 'ALEPH v.5 has been ported into SWI-Prolog by (Riguzzi 2016)' with a GitHub link, but this is for a third-party tool (ALEPH) and not for the authors' own LIME-FOLD methodology. There is no concrete access provided for the LIME-FOLD source code. |
| Open Datasets | Yes | In this section, we present our experiments on UCI standard benchmarks (Lichman 2013). The ALEPH system (Srinivasan 2001) is used as the baseline. Lichman, M. 2013. UCI machine learning repository, http://archive.ics.uci.edu/ml. |
| Dataset Splits | Yes | First, we run ALEPH on 10 different datasets using a 5-fold cross-validation setting. Second, each dataset is transformed as explained in Algorithm 4. Then the LIME-FOLD algorithm is run in a 5-fold cross-validated setting, and the classification metrics are reported. |
| Hardware Specification | Yes | All experiments were run on an Intel Core i7 CPU @ 2.7GHz with 16 GB RAM and a 64-bit Windows 10. |
| Software Dependencies | Yes | The FOLD algorithm is a Java application that uses JPL library to connect to SWI prolog. ALEPH v.5 has been ported into SWI-Prolog by (Riguzzi 2016). |
| Experiment Setup | Yes | We set ALEPH to use the heuristic enumeration strategy, and the maximum number of branch nodes to be explored in a branch-and-bound search to 500K. In this research we conducted all experiments using the Extreme Gradient Boosting (XGBoost) algorithm (Chen and Guestrin 2016). |
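The evaluation protocol quoted above (5-fold cross-validation over datasets transformed per Algorithm 4) can be sketched in plain Python. This is only an illustrative harness, not the authors' LIME-FOLD implementation: the paper uses LIME scores from an XGBoost model to select relevant features per example, whereas `toy_feature_scores` below is a hypothetical stand-in explainer, and `transform_dataset` and `five_fold_splits` are names introduced here for illustration.

```python
def five_fold_splits(n, k=5):
    """Yield (train_idx, test_idx) index lists for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test


def toy_feature_scores(example):
    """Stand-in for a LIME explainer: rank features by absolute magnitude.
    (The real pipeline would rank by LIME weights from an XGBoost model.)"""
    return sorted(range(len(example)), key=lambda j: -abs(example[j]))


def transform_dataset(X, top_k=2):
    """Mirror the idea of Algorithm 4: per example, keep only the values of
    the top_k 'most relevant' features and mask the rest with None."""
    transformed = []
    for x in X:
        keep = set(toy_feature_scores(x)[:top_k])
        transformed.append([v if j in keep else None for j, v in enumerate(x)])
    return transformed


X = [[0.1, 3.0, -2.0], [5.0, 0.2, 0.1], [0.0, -1.0, 4.0]]
print(transform_dataset(X, top_k=1))
# Each example retains only its single highest-scoring feature value.
print(len(list(five_fold_splits(10))))  # 5 train/test splits
```

In the actual experiments, the masked dataset would then be fed to the FOLD algorithm to induce a non-monotonic logic program, with metrics averaged over the five folds.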