Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Reinforced Continual Learning

Authors: Ju Xu, Zhanxing Zhu

NeurIPS 2018 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments on sequential classification tasks for variants of the MNIST and CIFAR-100 datasets demonstrate that the proposed approach outperforms existing continual learning alternatives for deep networks.
Researcher Affiliation | Academia | Ju Xu, Center for Data Science, Peking University, Beijing, China (EMAIL); Zhanxing Zhu, Center for Data Science, Peking University & Beijing Institute of Big Data Research (BIBDR), Beijing, China (EMAIL)
Pseudocode | Yes | Algorithm 1: RCL for Continual Learning
Open Source Code | No | The paper does not provide a statement about releasing open-source code or a link to a code repository.
Open Datasets | Yes | Datasets: (1) MNIST Permutations [4]; ... (3) Incremental CIFAR-100 [9].
Dataset Splits | No | While the paper mentions a 'validation dataset Vt' used for reward calculation, it does not provide specific split sizes (e.g., percentages or sample counts) for that set; sizes are given only for the training and test sets.
Hardware Specification | Yes | "We implemented all the experiments in Tensorfolw [sic] framework on GPU Tesla K80."
Software Dependencies | No | The paper mentions the TensorFlow framework (spelled 'Tensorfolw' in the text) but does not specify a version number or any other software dependencies with versions.
Experiment Setup | No | The paper mentions varying hyperparameters (e.g., α, learning rate η, number of epochs Te) but does not provide their specific values in the text, which hinders replication.
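For orientation, the outer loop that the paper's Algorithm 1 (RCL) follows (expand the network for each new task, and score candidate expansions by reward on a validation set) can be sketched as a toy loop. Everything below is hypothetical: the layer widths, the candidate action set, the closed-form reward, and the greedy sweep over actions are illustrative stand-ins (the paper uses a learned policy-gradient controller to propose expansions), not the authors' implementation.

```python
def rcl_sketch(num_tasks=3, actions=(1, 2, 4)):
    """Toy outer loop of RCL-style continual learning (NOT the paper's code).

    For each new task, try candidate expansion sizes, score each with a
    stand-in "validation reward", and keep the best expansion.
    """
    widths = [16, 16]          # hypothetical filter counts per layer
    history = []
    for task in range(num_tasks):
        best_action, best_reward = None, float("-inf")
        for action in actions:
            # Stand-in for "train the expanded net, measure validation
            # accuracy": a fabricated toy reward, not real training.
            reward = 1.0 - 1.0 / (sum(widths) + action * len(widths))
            if reward > best_reward:
                best_action, best_reward = action, reward
        # Keep the best-scoring expansion and grow every layer by it.
        widths = [w + best_action for w in widths]
        history.append((task, best_action, list(widths)))
    return history
```

With this toy reward, larger expansions always score higher, so the sketch simply grows each layer by the largest candidate per task; in the real algorithm the controller trades accuracy against network complexity.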