Reinforced Continual Learning
Authors: Ju Xu, Zhanxing Zhu
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on sequential classification tasks for variants of MNIST and CIFAR-100 datasets demonstrate that the proposed approach outperforms existing continual learning alternatives for deep networks. |
| Researcher Affiliation | Academia | Ju Xu, Center for Data Science, Peking University, Beijing, China (xuju@pku.edu.cn); Zhanxing Zhu, Center for Data Science, Peking University & Beijing Institute of Big Data Research (BIBDR), Beijing, China (zhanxing.zhu@pku.edu.cn) |
| Pseudocode | Yes | Algorithm 1 RCL for Continual Learning |
| Open Source Code | No | The paper does not provide a statement about releasing open-source code or a link to a code repository. |
| Open Datasets | Yes | Datasets (1) MNIST Permutations [4]. ... (3) Incremental CIFAR-100 [9]. |
| Dataset Splits | No | While the paper mentions the use of a 'validation dataset Vt' for reward calculation, it does not provide specific split sizes (e.g., percentages or sample counts) for this validation set, only for training and test sets. |
| Hardware Specification | Yes | "We implemented all the experiments in Tensorfolw [sic] framework on GPU Tesla K80." |
| Software Dependencies | No | The paper mentions the "Tensorfolw" [sic] (TensorFlow) framework but does not specify a version number or any other software dependencies with versions. |
| Experiment Setup | No | The paper mentions varying hyperparameters (e.g., α, learning rate η, number of epochs Te) but does not state their specific values in the text, so the setup cannot be replicated exactly. |
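
The table notes that RCL computes a reward from accuracy on a per-task validation set Vt when deciding how to expand the network. A minimal sketch of that outer loop, assuming a hypothetical `train_child` stand-in for training the expanded network (the paper's actual controller is trained with policy gradients; here a simple best-of-N search over expansion sizes is used purely for illustration):

```python
import random

def train_child(num_new_units, task):
    # Hypothetical stand-in for training the network expanded by
    # `num_new_units` on task `task`; returns a simulated accuracy
    # on the validation set V_t (deterministic given its inputs).
    rng = random.Random(task * 100 + num_new_units)
    return min(1.0, 0.5 + 0.02 * num_new_units + rng.uniform(0.0, 0.1))

def rcl_step(task, max_units=10, trials=5, seed=0):
    """One RCL-style expansion step: sample candidate expansion sizes,
    reward each by validation accuracy on V_t, and keep the best.
    This greedy search replaces the paper's REINFORCE-trained controller."""
    rng = random.Random(seed)
    best_action, best_reward = None, float("-inf")
    for _ in range(trials):
        action = rng.randint(1, max_units)   # units/filters to add for task t
        reward = train_child(action, task)   # reward = accuracy on V_t
        if reward > best_reward:
            best_action, best_reward = action, reward
    return best_action, best_reward
```

Because the paper omits the relevant hyperparameter values, `max_units` and `trials` here are placeholders, not the settings used in the experiments.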