Lifelong Learning with Weighted Majority Votes
Authors: Anastasia Pentina, Ruth Urner
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We provide a lifelong learning algorithm with error guarantees for every observed task (rather than on average). We show sample complexity reductions in comparison to solving every task in isolation in terms of our task complexity measure. Further, our algorithmic framework can naturally be viewed as learning a representation from encountered tasks with a neural network. |
| Researcher Affiliation | Academia | Anastasia Pentina, IST Austria, apentina@ist.ac.at; Ruth Urner, Max Planck Institute for Intelligent Systems, rurner@tuebingen.mpg.de |
| Pseudocode | Yes | Algorithm 1 Lifelong learning of majority votes... Algorithm 2 Lifelong learning of majority votes with unknown horizon |
| Open Source Code | No | The paper does not contain any statement about making its source code available, nor does it provide any links to a code repository. |
| Open Datasets | No | The paper is theoretical and does not describe using specific publicly available datasets for empirical training. It refers to 'training set S1 from D1, h1' as abstract components within its theoretical algorithms. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical validation experiments with specific dataset splits (training, validation, test). |
| Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for computational experiments. |
| Software Dependencies | No | The paper is theoretical and does not specify any software dependencies or versions required for replication. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or system-level training settings for empirical runs. The parameters ϵ and δ are theoretical accuracy and confidence parameters for bounds, not experimental hyperparameters. |
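For readers unfamiliar with the paper's central object, a weighted majority vote combines previously learned binary hypotheses into a single predictor by a weighted sign vote. The following is a minimal illustrative sketch only; the threshold hypotheses and weights below are hypothetical examples, not the paper's construction.

```python
# Hypothetical sketch: a weighted majority vote over binary
# hypotheses with labels in {-1, +1}. The hypotheses and weights
# are toy examples for illustration, not taken from the paper.

def weighted_majority_vote(hypotheses, weights, x):
    """Predict the sign of the weighted sum of hypothesis outputs."""
    score = sum(w * h(x) for h, w in zip(hypotheses, weights))
    return 1 if score >= 0 else -1

# Example: three toy threshold hypotheses on the real line.
hs = [lambda x: 1 if x > 0 else -1,
      lambda x: 1 if x > 1 else -1,
      lambda x: 1 if x > 2 else -1]
ws = [0.5, 0.3, 0.2]

print(weighted_majority_vote(hs, ws, 1.5))  # -> 1 (weight 0.8 votes +1)
print(weighted_majority_vote(hs, ws, -1.0)) # -> -1 (all vote -1)
```

The vote is deterministic once the weights are fixed; in the lifelong setting the weights over earlier hypotheses are what gets adapted as new tasks arrive.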