Online Boosting Algorithms for Anytime Transfer and Multitask Learning
Authors: Boyu Wang, Joelle Pineau
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results show state-of-the-art empirical performance on standard benchmarks, and we present results of using our methods for effectively detecting new seizures in patients with epilepsy from very few previous samples. |
| Researcher Affiliation | Academia | Boyu Wang and Joelle Pineau, School of Computer Science, McGill University, Montreal, Canada; boyu.wang@mail.mcgill.ca, jpineau@cs.mcgill.ca |
| Pseudocode | Yes | Algorithm 1 TrAdaBoost Algorithm (Dai et al. 2007); Algorithm 2 Online Transfer Boosting; Algorithm 3 Online Multitask Boosting (a sketch of the TrAdaBoost reweighting step appears after this table) |
| Open Source Code | No | The paper does not explicitly state that source code for the methodology is openly available, nor does it provide a direct link to a code repository for the algorithms described. |
| Open Datasets | Yes | We evaluate OTB algorithm on the 20 newsgroups data set; We evaluate the OMB algorithm on the landmine dataset; The dataset consists of patients suffering from medically intractable focal epilepsy at the Epilepsy Center of the University Hospital of Freiburg, in Germany (Freiburg University 2012). |
| Dataset Splits | Yes | In all experiments, we vary the proportion of training data, and use the rest of the data as test data. The results have been averaged over 10 runs for random permutations of training data and test data. (See the evaluation sketch after this table.) |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'The naive Bayes classifier is used as the base learner for boosting,' but it does not specify any version numbers for this or any other software dependencies. |
| Experiment Setup | No | The paper describes general experimental settings like using naive Bayes as a base learner and averaging over 10 runs, but it does not provide specific details on hyperparameters, optimizer settings, or other concrete training configurations. |
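The paper's Algorithm 1 reproduces TrAdaBoost (Dai et al. 2007), whose instance-reweighting step is the core idea that the online variants (OTB, OMB) carry over to streaming data. Below is a minimal batch sketch of that reweighting, not the authors' code: `GaussianNB` stands in for the paper's unspecified naive Bayes base learner, binary {0, 1} labels are assumed, and the error clamping is an illustrative safeguard.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB  # stand-in for the paper's naive Bayes base learner

def tradaboost(Xs, ys, Xt, yt, n_rounds=10):
    """Minimal batch TrAdaBoost sketch (after Dai et al. 2007).

    Xs, ys: source-domain data/labels; Xt, yt: target-domain data/labels.
    Returns the learners and their weights from the second half of the
    rounds, as in the original algorithm's final hypothesis.
    """
    n, m = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    w = np.ones(n + m) / (n + m)                              # uniform initial weights
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_rounds))  # fixed source discount
    learners, betas_t = [], []
    for _ in range(n_rounds):
        p = w / w.sum()
        h = GaussianNB().fit(X, y, sample_weight=p)
        pred = h.predict(X)
        # Weighted error is measured on the target data only.
        err = np.sum(p[n:] * (pred[n:] != yt)) / p[n:].sum()
        err = min(max(err, 1e-10), 0.499)                     # keep beta_t well defined (illustrative)
        beta_t = err / (1.0 - err)
        # Key transfer step: down-weight misclassified *source* points,
        # up-weight misclassified *target* points.
        w[:n] *= beta ** (pred[:n] != ys)
        w[n:] *= beta_t ** (-(pred[n:] != yt).astype(float))
        learners.append(h)
        betas_t.append(beta_t)
    return learners[n_rounds // 2:], betas_t[n_rounds // 2:]
```

The online algorithms in the paper replace the batch reweighting loop with per-example weight updates so that the ensemble can be queried at any time, but the source/target asymmetry above is the same mechanism.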
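The split protocol quoted in the Dataset Splits row is simple enough to pin down in code. The following is a hedged sketch, not the authors' evaluation script: `fit_predict` is a hypothetical callable standing in for any of the boosting variants, `X` and `y` are assumed to be NumPy arrays, and the accuracy metric and seed handling are assumptions.

```python
import numpy as np

def evaluate(X, y, train_fraction, fit_predict, n_runs=10, seed=0):
    """Average test accuracy over random permutations of the data.

    A `train_fraction` share of the examples is used for training and
    the rest for testing, repeated over `n_runs` random permutations,
    matching the protocol described in the paper.
    """
    rng = np.random.default_rng(seed)
    accuracies = []
    for _ in range(n_runs):
        perm = rng.permutation(len(X))
        cut = int(train_fraction * len(X))
        train_idx, test_idx = perm[:cut], perm[cut:]
        pred = fit_predict(X[train_idx], y[train_idx], X[test_idx])
        accuracies.append(np.mean(pred == y[test_idx]))
    return float(np.mean(accuracies))
```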