Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
Authors: Joao Marques-Silva, Thomas Gerspacher, Martin Cooper, Alexey Ignatiev, Nina Narodytska
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the performance gains of the new algorithms when compared with earlier work. The experimental results also investigate ways to measure the quality of heuristic explanations. |
| Researcher Affiliation | Collaboration | Joao Marques-Silva (IRIT, CNRS, University of Toulouse III, France); Thomas Gerspacher (ANITI, University of Toulouse, France); Martin C. Cooper (IRIT, CNRS, University of Toulouse III, France) {joao.marques-silva,thomas.gerspacher,cooper}@irit.fr; Alexey Ignatiev (Monash University, Australia) alexey.ignatiev@monash.edu; Nina Narodytska (VMware Research, CA, USA) nnarodytska@vmware.com |
| Pseudocode | Yes | Algorithm 1: Finding one explanation; Algorithm 2: Finding all explanations; Algorithm 3: Entering a valid state |
| Open Source Code | Yes | The source code of XPXLC as well as the datasets, a demo and accompanying documentation are available at https://github.com/jpmarquessilva/expxlc. |
| Open Datasets | Yes | We selected a set of widely used, publicly available datasets from [37, 28, 13]. The total number of datasets used is 37. (All the datasets and the trained classifiers are available in the online repository.) |
| Dataset Splits | No | For each dataset, we trained a Naive Bayes classifier using 80% of the training data. The average test accuracy assessed for the 20% remaining instances is 77.7%. No explicit mention of a validation split. |
| Hardware Specification | Yes | XPXLC was tested in Debian Linux on an Intel Xeon CPU 5160 3.00 GHz with 64 GByte of memory. |
| Software Dependencies | No | The paper mentions 'Debian Linux' as the operating system and 'scikit-learn [33]' for the Naive Bayes classifier, but does not specify exact version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper describes using a Naive Bayes classifier trained on 80% of the data, but does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, optimizer settings) or model initialization. |
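The 80/20 train/test split noted in the Dataset Splits row can be sketched as follows. This is a minimal stdlib-only illustration: the paper itself used scikit-learn, and the function name, seed, and instance count here are purely illustrative assumptions.

```python
import random

def split_80_20(n_instances, seed=0):
    """Return (train_idx, test_idx) for an 80/20 split of n_instances.

    Illustrative sketch only; the paper used scikit-learn, and the seed
    value here is an assumption, not something the paper reports.
    """
    idx = list(range(n_instances))
    rng = random.Random(seed)
    rng.shuffle(idx)
    cut = int(0.8 * n_instances)
    return idx[:cut], idx[cut:]

# Example: 100 instances -> 80 for training, 20 held out for testing.
train_idx, test_idx = split_80_20(100)
assert len(train_idx) == 80 and len(test_idx) == 20
assert set(train_idx).isdisjoint(test_idx)
```

Note that this reproduces only the split ratio the paper states; since no validation split is mentioned, none is modeled here.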