Learning Accurate and Interpretable Decision Rule Sets from Neural Networks
Authors: Litao Qiao, Weijia Wang, Bill Lin
AAAI 2021, pp. 4303-4311
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show that our method can generate more accurate decision rule sets than other state-of-the-art rule-learning algorithms with better accuracy-simplicity trade-offs. The numerical experiments were evaluated on 4 publicly available binary classification datasets. |
| Researcher Affiliation | Academia | Litao Qiao , Weijia Wang , Bill Lin Electrical and Computer Engineering, University of California San Diego l1qiao@eng.ucsd.edu, wweijia@eng.ucsd.edu, billlin@eng.ucsd.edu |
| Pseudocode | No | The paper describes the mathematical formulations and processes but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | The first two selected datasets are from UCI Machine Learning Repository (Dua and Graff 2017): MAGIC gamma telescope (magic) and adult census (adult)... |
| Dataset Splits | Yes | 5-fold nested cross validation was employed to select the parameters for all rule learners that explicitly trade-off between accuracy and interpretability to maximize the training set accuracies. (A nested cross-validation sketch follows this table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions using "scikit-learn" and refers to frameworks like "TensorFlow" and "PyTorch" indirectly through citations, but it does not specify version numbers for any of these software dependencies. |
| Experiment Setup | Yes | For DR-Net, we used the Adam optimizer with a fixed learning rate of 10^-2 and no weight decay across all experiments. There are 50 neurons in the Rules layer... The alternating two-phase training strategy... is employed with 10,000 total training epochs and 1,000 epochs for each layer. For simplicity, the batch size is fixed at 2,000 and the weights are uniformly initialized within the range between 0 and 1. (A training-configuration sketch follows this table.) |
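
The dataset-splits row quotes a 5-fold nested cross-validation protocol. The sketch below shows what that protocol looks like in scikit-learn: an inner grid search selects a complexity parameter on each training fold, and an outer loop reports held-out accuracy. The estimator, parameter grid, and synthetic data are hypothetical stand-ins, not the paper's rule learners or the UCI datasets it uses.

```python
# Minimal sketch of 5-fold nested cross-validation, assuming a generic
# scikit-learn estimator in place of the paper's rule learners.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data standing in for the UCI datasets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Inner loop: grid search picks the accuracy/complexity trade-off parameters
# on each training fold. The grid itself is hypothetical.
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
param_grid = {"max_depth": [2, 4, 8], "min_samples_leaf": [1, 10, 50]}
model = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=inner_cv)

# Outer loop: unbiased accuracy estimate with the selected parameters.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(model, X, y, cv=outer_cv)
print(f"nested-CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```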
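The experiment-setup row describes the DR-Net training configuration. The PyTorch sketch below reproduces only the stated settings: Adam at a fixed learning rate of 10^-2 with no weight decay, 50 neurons in the Rules layer, weights initialized uniformly in [0, 1], and the alternating two-phase schedule of 10,000 total epochs switching layers every 1,000 epochs. The plain `nn.Linear` layers are simplified placeholders for the paper's binarized Rules (AND) and OR layers, and `train_one_epoch` is a hypothetical helper, left as a comment.

```python
# Hedged sketch of the quoted DR-Net training configuration in PyTorch.
# Only the optimizer, layer width, initialization, and alternating schedule
# follow the text; the layers themselves are simplified placeholders.
import torch
import torch.nn as nn

n_features, n_rules = 100, 50  # 50 neurons in the Rules layer; feature count is illustrative

rules_layer = nn.Linear(n_features, n_rules)  # stand-in for the binarized Rules (AND) layer
or_layer = nn.Linear(n_rules, 1)              # stand-in for the binarized OR layer
for layer in (rules_layer, or_layer):
    nn.init.uniform_(layer.weight, 0.0, 1.0)  # weights uniformly initialized in [0, 1]

# Adam with a fixed learning rate of 1e-2 and no weight decay, as stated.
optimizer = torch.optim.Adam(
    list(rules_layer.parameters()) + list(or_layer.parameters()),
    lr=1e-2,
    weight_decay=0.0,
)

def set_trainable(layer: nn.Module, flag: bool) -> None:
    """Freeze or unfreeze all parameters of one layer."""
    for p in layer.parameters():
        p.requires_grad_(flag)

# Alternating two-phase training: 10,000 total epochs, switching which layer
# is trainable every 1,000 epochs.
for epoch in range(10_000):
    train_rules = (epoch // 1_000) % 2 == 0
    set_trainable(rules_layer, train_rules)
    set_trainable(or_layer, not train_rules)
    # train_one_epoch(rules_layer, or_layer, optimizer, batch_size=2_000)  # hypothetical helper
```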