Variational Continual Learning

Authors: Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, Richard E. Turner

ICLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that VCL outperforms state-of-the-art continual learning methods on a variety of tasks, avoiding catastrophic forgetting in a fully automatic way."
Researcher Affiliation | Academia | "Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, Richard E. Turner, Department of Engineering, University of Cambridge {vcn22,yl494,tdb40,ret26}@cam.ac.uk"
Pseudocode | Yes | "Algorithm 1: Coreset VCL"
Open Source Code | Yes | "An implementation of the methods proposed in this paper can be found at: https://github.com/nvcuong/variational-continual-learning"
Open Datasets | Yes | "Permuted MNIST: This is a popular continual learning benchmark (Goodfellow et al., 2014a; Kirkpatrick et al., 2017; Zenke et al., 2017)."
Dataset Splits | No | The paper mentions training and testing but does not explicitly provide training, validation, and test splits with percentages or counts.
Hardware Specification | No | The paper does not provide hardware details such as GPU models, CPU types, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify software dependencies with version numbers (e.g., specific versions of Python, TensorFlow, PyTorch, or other libraries).
Experiment Setup | Yes | "For all algorithms, we use fully connected single-head networks with two hidden layers, where each layer contains 100 hidden units with ReLU activations."
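The Permuted MNIST benchmark cited above builds each task by applying one fixed, task-specific random permutation to the pixels of every image, leaving labels unchanged. A minimal numpy sketch of that construction (function name and the random stand-in data are ours, not from the paper):

```python
import numpy as np

def make_permuted_task(x, seed):
    """Apply one fixed random pixel permutation to flattened images.

    Each continual-learning task gets its own permutation (its own seed);
    labels are left unchanged. `x` has shape (n_examples, 784) for MNIST.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(x.shape[1])
    return x[:, perm]

# Random stand-in data; the real benchmark uses MNIST images.
x = np.random.rand(5, 784)
task1 = make_permuted_task(x, seed=1)
task2 = make_permuted_task(x, seed=2)
```

Because the permutation is fixed per task, every example within a task is shuffled identically, which is what makes the tasks related yet distinct.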
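VCL's Gaussian mean-field variant regularises the new approximate posterior toward the previous task's posterior via a KL term. A sketch of the closed-form KL divergence between diagonal Gaussians that such a term reduces to (the function name is our choice, not the paper's API):

```python
import numpy as np

def kl_diag_gaussians(m_q, s_q, m_p, s_p):
    """KL( N(m_q, diag(s_q^2)) || N(m_p, diag(s_p^2)) ), summed over dims.

    m_q, s_q: mean and std of the new posterior q.
    m_p, s_p: mean and std of the previous posterior (the "prior" here).
    """
    return 0.5 * np.sum(
        2.0 * np.log(s_p / s_q)
        + (s_q**2 + (m_q - m_p)**2) / s_p**2
        - 1.0
    )

# Illustrative usage: penalty for drifting from last task's posterior.
prev_mean, prev_std = np.zeros(4), np.ones(4)
new_mean, new_std = 0.1 * np.ones(4), 0.9 * np.ones(4)
penalty = kl_diag_gaussians(new_mean, new_std, prev_mean, prev_std)
```

The KL is zero exactly when the two Gaussians coincide, so the penalty vanishes if the posterior does not move between tasks.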
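The experiment setup quoted in the table describes a fully connected single-head network with two hidden layers of 100 ReLU units. A minimal deterministic (non-Bayesian) numpy sketch of that architecture's forward pass, under assumed He initialisation (the paper does not state its initialisation scheme):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def init_mlp(sizes, rng):
    """He-initialised weights for a fully connected network."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def forward(params, x):
    """Single-head MLP: ReLU hidden layers, linear output logits."""
    h = x
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    return h @ W + b

rng = np.random.default_rng(0)
# 784 MNIST inputs, two hidden layers of 100 units, 10 output classes.
params = init_mlp([784, 100, 100, 10], rng)
logits = forward(params, rng.standard_normal((4, 784)))
```

In VCL proper each weight would carry a variational mean and variance rather than a point value; this sketch only fixes the layer shapes the paper specifies.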