Scalable Bayesian Rule Lists

Authors: Hongyu Yang, Cynthia Rudin, Margo Seltzer

ICML 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Through a series of controlled experiments, we show that SBRL is over two orders of magnitude faster than the previous best code for this problem." |
| Researcher Affiliation | Academia | "1 Massachusetts Institute of Technology, Cambridge, Massachusetts, USA; 2 Duke University, Durham, North Carolina, USA; 3 Harvard University, Cambridge, Massachusetts, USA." |
| Pseudocode | Yes | "Algorithm 1: Calculating the b_j's" |
| Open Source Code | Yes | Code for SBRL is available at https://github.com/Hongyuy/sbrlmod; the R package sbrl is on CRAN at https://cran.r-project.org/web/packages/sbrl/index.html |
| Open Datasets | Yes | "We benchmark using publicly available datasets (see Bache & Lichman, 2013)." |
| Dataset Splits | Yes | "Evaluations of prediction quality, sparsity, and timing were done using 10-fold cross validation." |
| Hardware Specification | No | The paper states that experiments were run "on a laptop" but gives no specific hardware details such as CPU model or memory size. |
| Software Dependencies | No | The paper mentions a Python implementation, the Python gmpy library, a port from Python to C, and the GMP library, but provides no version numbers for any of these components. |
| Experiment Setup | Yes | "The prior parameters were fixed at η = 1 and α = (1, 1)." |
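The evaluation protocol reported above (10-fold cross validation with the prior hyperparameters fixed at η = 1 and α = (1, 1)) can be sketched as follows. This is a minimal illustration, not the authors' code: `run_sbrl` is a hypothetical stand-in for the actual SBRL training and scoring call, and only the fold-splitting logic and the hyperparameter values come from the paper.

```python
import random

def ten_fold_splits(n_samples, n_folds=10, seed=0):
    """Shuffle sample indices and yield (train, test) index lists per fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]
    for k in range(n_folds):
        test = folds[k]
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        yield train, test

# Prior hyperparameters reported in the paper.
ETA = 1
ALPHA = (1, 1)

def run_sbrl(train_idx, test_idx, eta=ETA, alpha=ALPHA):
    # Hypothetical placeholder: the real experiment would fit SBRL on the
    # training fold and return held-out accuracy. Here we just return the
    # test-fold fraction so the protocol is runnable end to end.
    return len(test_idx) / (len(train_idx) + len(test_idx))

scores = [run_sbrl(tr, te) for tr, te in ten_fold_splits(100)]
print(len(scores), round(sum(scores) / len(scores), 2))  # → 10 0.1
```

Each of the 100 indices appears in exactly one test fold, so the ten held-out scores together cover the whole dataset, matching the 10-fold protocol the paper describes.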