Transparent Classification with Multilayer Logical Perceptrons and Random Binarization
Authors: Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang (pp. 6331-6339)
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on 12 public data sets show that CRS outperforms the state-of-the-art approaches and the complexity of the learned CRS is close to the simple decision tree. |
| Researcher Affiliation | Academia | Zhuo Wang (1), Wei Zhang (2), Ning Liu (1), Jianyong Wang (1); (1) Department of Computer Science and Technology, Tsinghua University; (2) School of Computer Science and Technology, Shanghai Key Laboratory of Trustworthy Computing, East China Normal University |
| Pseudocode | No | The paper describes algorithms and methods but does not provide a formally labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the methodology described. |
| Open Datasets | Yes | We took 12 datasets from the UCI machine learning repository (Dua and Graff 2017), all of which are often used to test classification performance and model transparency (Letham et al. 2015; Wang et al. 2017; Yang, Rudin, and Seltzer 2017; Hühn and Hüllermeier 2009). |
| Dataset Splits | Yes | To evaluate the classification performance of our model and baselines more fairly, 5-fold cross-validation is adopted to have a lower bias on experimental results. Additionally, 80% of the training set is used for training and 20% for validation when parameter tuning is required. (A reproduction sketch of this split protocol follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | We set the number of logical layers in CRS, i.e., 2L, to 4. The number of nodes in each middle layer ranges from 32 to 256 depending on the number of binary features of the data set. The batch size is set to 128, and the model is trained for 400 epochs. We initialize the learning rate to 5 × 10^-3 and decay it by a factor of 0.75 every 100 epochs. The weight decay is set to 10^-8. When using the RB method, we change the selected subset of weights after every epoch and tune the rate of binarization P using validation sets. (Training-loop and RB sketches follow the table.) |
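
The split protocol quoted in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch assuming scikit-learn and placeholder arrays `X` and `y` (both hypothetical; the paper does not name its tooling):

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

# Hypothetical placeholders standing in for one of the 12 UCI datasets.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

# 5-fold cross-validation, as described in the paper.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(X)):
    X_train_full, y_train_full = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]

    # Within each fold: 80% of the training set for training,
    # 20% for validation when parameter tuning is required.
    X_train, X_val, y_train, y_val = train_test_split(
        X_train_full, y_train_full, test_size=0.2, random_state=0
    )
    print(f"fold {fold}: train={len(X_train)}, val={len(X_val)}, test={len(X_test)}")
```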
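The hyperparameters quoted in the Experiment Setup row map directly onto a standard PyTorch training configuration. The excerpt does not name the optimizer, so Adam below is an assumption; the learning rate, decay schedule, weight decay, batch size, and epoch count are the quoted values:

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the CRS network (the real model uses 2L = 4
# logical layers with 32-256 nodes per middle layer).
model = nn.Sequential(nn.Linear(64, 128), nn.Sigmoid(), nn.Linear(128, 2))

# Assumption: the optimizer choice is not stated in the excerpt.
# Quoted values: initial lr = 5e-3, weight decay = 1e-8.
optimizer = optim.Adam(model.parameters(), lr=5e-3, weight_decay=1e-8)

# Decay the learning rate by a factor of 0.75 every 100 epochs.
scheduler = StepLR(optimizer, step_size=100, gamma=0.75)

# Toy data; batch size 128 as quoted.
dataset = TensorDataset(torch.randn(1000, 64), torch.randint(0, 2, (1000,)))
loader = DataLoader(dataset, batch_size=128, shuffle=True)

criterion = nn.CrossEntropyLoss()
for epoch in range(400):  # trained for 400 epochs
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()
```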
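The RB step ("change the selected subset of weights after every epoch") can also be illustrated, though with stronger caveats: the paper provides no pseudocode (see the Pseudocode row), so the layer below is only one plausible reading of the quoted description, not the authors' confirmed implementation. It resamples a mask covering a fraction `p` of the weight entries once per epoch and binarizes only those entries in the forward pass, keeping the continuous weights for gradient updates:

```python
import torch
from torch import nn

class RBLinear(nn.Linear):
    """Linear layer with random binarization (RB) of a weight subset.

    Assumptions: weights live in [0, 1] and "binarize" means thresholding
    at 0.5; gradients reach the continuous weights via a straight-through
    trick. Neither detail is pinned down by the excerpt.
    """

    def __init__(self, in_features, out_features, p=0.5):
        super().__init__(in_features, out_features)
        self.p = p  # rate of binarization P, tuned on the validation set
        self.resample_mask()

    def resample_mask(self):
        # Change the selected subset of weights; call once per epoch.
        self.rb_mask = torch.rand_like(self.weight) < self.p

    def forward(self, x):
        # Numerically equal to the thresholded weights on masked entries,
        # but gradients still flow to self.weight (straight-through).
        binary = (self.weight > 0.5).float()
        ste = binary + self.weight - self.weight.detach()
        w = torch.where(self.rb_mask, ste, self.weight)
        return nn.functional.linear(x, w, self.bias)
```

A training loop would call `resample_mask()` on every RB layer at the start of each epoch and tune `p` on the 20% validation split, matching the quoted procedure.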