A Boosting Framework on Grounds of Online Learning
Authors: Tofigh Naghibi Mohamadpoor, Beat Pfister
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We present a boosting framework which proves to be extremely powerful thanks to employing the vast knowledge available in the online learning area. Using this framework, we develop various algorithms to address multiple practically and theoretically interesting questions including sparse boosting, smooth-distribution boosting, agnostic learning and, as a by-product, some generalization to double-projection online learning algorithms. |
| Researcher Affiliation | Academia | Tofigh Naghibi and Beat Pfister, Computer Engineering and Networks Laboratory, ETH Zurich, Switzerland; naghibi@tik.ee.ethz.ch, pfister@tik.ee.ethz.ch |
| Pseudocode | Yes | Algorithm 1: Mirror Ascent Boosting (MABoost); Algorithm 2: SparseBoost; Algorithm 3: a variant of MadaBoost (hedged sketches of the mirror-ascent machinery follow this table) |
| Open Source Code | No | The paper does not provide any statements or links regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper refers to 'N training samples' and 'primary dataset A and a secondary dataset B' in a theoretical context but does not mention specific, publicly available datasets or provide access information for any dataset used. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages, sample counts, or citations to predefined splits) needed to reproduce data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers). |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters, training configurations, or system-level settings. |
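Since the paper ships pseudocode only, the following is a minimal, hypothetical Python sketch of the idea behind Algorithm 1 (MABoost): a mirror-ascent update on the example-weight distribution. It assumes a negative-entropy regularizer, under which the mirror step followed by Bregman projection onto the probability simplex reduces to a multiplicative, AdaBoost-style reweighting. The step size `eta`, the combination weights `alphas`, and the `stump_learner` helper are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def stump_learner(X, y, w):
    """Illustrative weighted decision stump: picks the single
    (feature, threshold, sign) with the largest weighted edge."""
    best, best_edge = None, -np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] >= thr, 1, -1)
                edge = np.dot(w, y * pred)
                if edge > best_edge:
                    best_edge, best = edge, (j, thr, s)
    j, thr, s = best
    return lambda Xq: s * np.where(Xq[:, j] >= thr, 1, -1)

def maboost_entropy(X, y, weak_learner=stump_learner, T=50, eta=0.5):
    """Sketch of mirror-ascent boosting with a negative-entropy
    regularizer (NOT the paper's verbatim Algorithm 1).
    Labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                # uniform initial distribution
    hypotheses, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, w)          # train on current distribution
        margins = y * h(X)                 # +1 if correct, -1 if wrong
        gamma = float(np.dot(w, margins))  # weak learner's edge over chance
        if gamma <= 0:                     # no useful hypothesis left: stop
            break
        # Mirror-ascent step under negative entropy is an exponentiated
        # gradient; renormalizing is the Bregman projection onto the
        # simplex. Misclassified examples gain weight.
        w = w * np.exp(-eta * margins)
        w /= w.sum()
        hypotheses.append(h)
        alphas.append(eta * gamma)         # illustrative combination weight
    return lambda Xq: np.sign(
        sum(a * h(Xq) for a, h in zip(alphas, hypotheses)))
```

With a fixed `eta` and `alphas` proportional to the observed edge, this behaves like a smoothed AdaBoost; the point of the MABoost abstraction is that changing the regularizer, not the loop, yields the paper's other boosters.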
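The sparse-boosting variant mentioned in the abstract hinges on swapping negative entropy for a squared Euclidean norm, so the Bregman projection becomes a Euclidean projection onto the simplex, which can set example weights exactly to zero. Below is a standard O(n log n) projection routine in the style of Duchi et al. (2008); its use inside the boosting loop is an assumption for illustration, not the paper's verbatim Algorithm 2.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {w : w >= 0, sum(w) = 1}. Entries below the threshold theta are
    clipped to exactly zero, which is the source of sparsity: easy
    examples can receive zero weight and be skipped."""
    u = np.sort(v)[::-1]                   # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```

In a quadratic-regularizer loop the mirror step would be a plain additive update, e.g. `w = project_to_simplex(w - eta * margins)` (again an assumed reconstruction); any weight clipped to zero means that example can be dropped from the next weak-learning call.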