On Boosting Sparse Parities
Authors: Lev Reyzin
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental tests show the proposed weak learners to be competitive with the most widely used ones: decision stumps and pruned decision trees. |
| Researcher Affiliation | Academia | Lev Reyzin, Department of Mathematics, Statistics, & Computer Science, University of Illinois at Chicago, Chicago, IL 60607, lreyzin@math.uic.edu |
| Pseudocode | Yes | Algorithm 1: AdaBoost (Freund and Schapire 1997); Algorithm 2: Approach of Grigorescu et al. (2011) for learning d-parities (d, n, ϵ, δ, η). (A sketch follows the table.) |
| Open Source Code | No | The paper does not contain any explicit statement about making its source code available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We considered the following datasets: census, splice, ocr17, ocr49, breast cancer, heart, and ecoli, all available from the UCI repository. (A loading example follows the table.) |
| Dataset Splits | Yes | 1000 training examples were used unless the dataset was too small to accommodate such a large training set, in which case fewer examples were used for training. Each run of the experiment randomly divided the data into training and test sets, and the errors are averages over 20 runs. (A protocol sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as CPU or GPU models, or memory specifications. It only mentions 'computing resources' in the acknowledgements. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers, such as programming languages, libraries, or frameworks used for implementation or experimentation. |
| Experiment Setup | Yes | Table 2: Error rates of decision stumps, 2-parities, and 3-parities used as weak learners for AdaBoost run for 250 rounds, averaged over 20 trials. |
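
The pseudocode row lists AdaBoost alongside a procedure for learning d-parities. The following is a minimal sketch, not the paper's code, of how the two fit together under one natural reading: the weak learner exhaustively searches all d-subsets of binarized {0,1} features for the parity (or its negation) with least weighted error, and AdaBoost reweights examples around it. Labels are assumed to be ±1, and all names (`parity_weak_learner`, `adaboost_parities`) are illustrative.

```python
from itertools import combinations
import numpy as np

def parity_weak_learner(X, y, w, d):
    """Exhaustively search all d-subsets of {0,1} features for the
    parity (or its negation) with the least weighted error."""
    best = (np.inf, None, 1)
    for S in combinations(range(X.shape[1]), d):
        # parity of the chosen coordinates, mapped to {-1, +1}
        h = 1 - 2 * (X[:, list(S)].sum(axis=1) % 2)
        for sign in (1, -1):
            err = w[(sign * h) != y].sum()
            if err < best[0]:
                best = (err, S, sign)
    return best  # (weighted error, feature subset, sign)

def adaboost_parities(X, y, d=2, rounds=250):
    """AdaBoost (Freund and Schapire 1997) with the d-parity weak learner."""
    m = len(y)
    w = np.full(m, 1.0 / m)                  # D_1: uniform distribution
    ensemble = []
    for _ in range(rounds):
        err, S, sign = parity_weak_learner(X, y, w, d)
        err = np.clip(err, 1e-10, 1 - 1e-10)  # guard against zero error
        alpha = 0.5 * np.log((1 - err) / err)
        h = sign * (1 - 2 * (X[:, list(S)].sum(axis=1) % 2))
        w *= np.exp(-alpha * y * h)          # up-weight mistakes
        w /= w.sum()
        ensemble.append((list(S), sign, alpha))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of the selected parities."""
    score = np.zeros(X.shape[0])
    for S, sign, alpha in ensemble:
        score += alpha * sign * (1 - 2 * (X[:, S].sum(axis=1) % 2))
    return np.sign(score)
```

The exhaustive search visits O(n^d) parities per round, which is why the paper restricts attention to small d such as 2 and 3.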
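
For the datasets row, one quick way to obtain comparable data today is through scikit-learn's `fetch_openml`, which pulls OpenML mirrors of several UCI datasets. The dataset name `"splice"` and version below are assumptions; the paper does not say how the data were fetched.

```python
from sklearn.datasets import fetch_openml

# Assumed OpenML mirror of the UCI splice dataset; name/version are guesses.
splice = fetch_openml(name="splice", version=1, as_frame=False)
X, y = splice.data, splice.target
```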
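
The splits and setup rows together describe the evaluation protocol: random train/test splits with up to 1000 training examples, test errors averaged over 20 runs, and AdaBoost run for 250 rounds. A minimal sketch of that loop follows, assuming a generic `train_and_predict` callable, which is a placeholder, not something from the paper.

```python
import numpy as np

def average_test_error(X, y, train_and_predict, n_train=1000, runs=20, seed=0):
    """Average held-out error over repeated random train/test splits."""
    rng = np.random.default_rng(seed)
    m = len(y)
    n_train = min(n_train, m - 1)            # use fewer examples on small datasets
    errors = []
    for _ in range(runs):
        perm = rng.permutation(m)
        tr, te = perm[:n_train], perm[n_train:]
        preds = train_and_predict(X[tr], y[tr], X[te])
        errors.append(np.mean(preds != y[te]))
    return float(np.mean(errors))
```

To mirror the Table 2 comparison, `train_and_predict` could wrap `adaboost_parities` from the sketch above with `rounds=250` and `d` set to 2 or 3, against decision stumps as a baseline.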