Interactive Martingale Boosting
Authors: Ashish Kulkarni, Pushpak Burange, Ganesh Ramakrishnan
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that while arbitrary preferences might be difficult to meet for a single classifier, a non-linear ensemble of classifiers, such as the one constructed by martingale boosting, performs better. |
| Researcher Affiliation | Academia | Ashish Kulkarni, Pushpak Burange, and Ganesh Ramakrishnan Department of Computer Science and Engineering Indian Institute of Technology Bombay Mumbai-400076, India {kulashish, pushpakburange}@gmail.com, ganesh@cse.iitb.ac.in |
| Pseudocode | No | The paper describes procedures but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/kulashish/adaptivemb |
| Open Datasets | Yes | We evaluate the effectiveness of interactive martingale boosting by performing experiments for the problem of binary classification on several UCI datasets including Spambase, Sonar, Ionosphere, and Liver, and for multiclass classification on Splice and Iris datasets. |
| Dataset Splits | Yes | Table 1 reports the average test accuracy across five splits (60% train and 40% test) of the dataset. ... Models were tuned using 3-fold cross-validation within each train split and the best model was chosen. |
| Hardware Specification | Yes | Evaluation was done on an Intel i7 machine with 8GB RAM and a 64-bit OS. |
| Software Dependencies | No | The paper mentions 'multinomial logistic regression' and 'BFGS update' as methods and the operating system ('64-bit OS'), but does not provide specific version numbers for any software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | The number of levels, L, of MB was empirically set to 15. |
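The evaluation protocol quoted above (five 60/40 train/test splits, with model selection by 3-fold cross-validation inside each train split) can be sketched as follows. This is a minimal illustration, not the paper's code: the synthetic data and the logistic-regression base learner are stand-ins for the UCI datasets and the martingale-boosting ensemble.

```python
# Hedged sketch of the quoted protocol: average test accuracy over
# five random 60/40 splits, tuning hyperparameters via 3-fold CV
# within each train split and keeping the best model.
# Assumptions: synthetic data and LogisticRegression replace the
# paper's UCI datasets and martingale-boosting ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

accs = []
for split in range(5):  # five independent 60% train / 40% test splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.6, random_state=split)
    # tune within the train split via 3-fold CV; keep the best model
    search = GridSearchCV(LogisticRegression(max_iter=1000),
                          param_grid={"C": [0.1, 1.0, 10.0]}, cv=3)
    search.fit(X_tr, y_tr)
    accs.append(search.score(X_te, y_te))

mean_acc = float(np.mean(accs))
print(f"average test accuracy over 5 splits: {mean_acc:.3f}")
```

The per-split random states make the splits reproducible, mirroring the fixed-split reporting in the paper's Table 1.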