Scalable Rule-Based Representation Learning for Interpretable Classification
Authors: Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Exhaustive experiments on nine small and four large data sets show that RRL outperforms the competitive interpretable approaches and can be easily adjusted to obtain a trade-off between classification accuracy and model complexity for different scenarios. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Technology, Tsinghua University; (2) Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University; (3) School of Computer Science and Technology, East China Normal University; (4) School of Software, Shandong University |
| Pseudocode | No | The paper describes the proposed methods and their components, and includes diagrams, but does not present any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at: https://github.com/12wang3/rrl. |
| Open Datasets | Yes | We took nine small and four large public datasets to conduct our experiments, all of which are often used to test classification performance and model interpretability (Dua and Graff, 2017; Xiao et al., 2017; Anguita et al., 2013; Rozemberczki et al., 2019). |
| Dataset Splits | Yes | We adopt 5-fold cross-validation to evaluate the classification performance more fairly. |
| Hardware Specification | Yes | All models are trained on a workstation with Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz, NVIDIA GeForce RTX 2080 Ti and 64GB RAM. |
| Software Dependencies | Yes | All models are implemented with PyTorch 1.9.0 (Paszke et al., 2019) and Adam optimizer (Kingma and Ba, 2014). |
| Experiment Setup | Yes | The learning rate of Adam is initialized to 1e-3, and is decayed by half every 10 epochs. The number of epochs for RRL is 500, and for other models is 100. |
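The Dataset Splits row above reports a 5-fold cross-validation protocol. A minimal sketch of that evaluation loop follows; the synthetic data and the `LogisticRegression` stand-in classifier are assumptions for illustration only (the paper evaluates RRL and its baselines, whose actual code is at https://github.com/12wang3/rrl).

```python
# Minimal sketch of a 5-fold cross-validation protocol.
# The data and classifier are placeholders, not the paper's RRL model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.random((200, 8))          # placeholder features
y = rng.integers(0, 2, size=200)  # placeholder binary labels

kf = KFold(n_splits=5, shuffle=True, random_state=0)
accuracies = []
for train_idx, test_idx in kf.split(X):
    clf = LogisticRegression(max_iter=1000)  # stand-in for RRL / baselines
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"Mean 5-fold accuracy: {np.mean(accuracies):.3f}")
```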
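The Software Dependencies and Experiment Setup rows map directly onto a standard PyTorch training configuration: Adam with an initial learning rate of 1e-3, halved every 10 epochs, for 500 epochs (RRL) or 100 epochs (other models). The sketch below shows one plausible reading of that setup, assuming "decayed by half every 10 epochs" corresponds to `StepLR(step_size=10, gamma=0.5)`; the `nn.Linear` model and random batch are placeholders, not the authors' implementation.

```python
# Sketch of the reported optimizer configuration (not the authors' code).
import torch
import torch.nn as nn

model = nn.Linear(8, 2)  # placeholder model; the paper trains RRL
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# "decayed by half every 10 epochs" read as StepLR(step_size=10, gamma=0.5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

criterion = nn.CrossEntropyLoss()
X = torch.randn(64, 8)           # placeholder batch
y = torch.randint(0, 2, (64,))   # placeholder labels

for epoch in range(500):         # 500 epochs for RRL, 100 for other models
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()
    scheduler.step()             # advance the LR schedule once per epoch
```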