Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay

Authors: Mohammad Rostami, Soheil Kolouri, Praveen K. Pilly

Venue: IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate theoretically and empirically that our framework learns a distribution in the embedding, which is shared across all tasks, and as a result tackles catastrophic forgetting. ... We demonstrate the effectiveness of our approach theoretically and empirically validate our method on benchmark tasks that have been used in the literature. ... (Section 6, Experimental Validation) We validate our method on learning two sets of sequential tasks: permuted MNIST tasks and related digit classification tasks."
Researcher Affiliation | Collaboration | Mohammad Rostami (University of Pennsylvania), Soheil Kolouri (HRL Laboratories, LLC), and Praveen K. Pilly (HRL Laboratories, LLC)
Pseudocode | Yes | "Algorithm 1 CLEER (L, λ)"
Open Source Code | Yes | "Our implementation code is available on GitHub."
Open Datasets | Yes | "We use permuted MNIST tasks to validate our framework. ... We consider two digit classification datasets for this purpose: MNIST (M) and USPS (U) datasets." (A sketch of the usual permuted MNIST construction follows the table.)
Dataset Splits | No | The paper mentions a 'training dataset' and a 'testing split' but does not specify a separate validation set or its split details.
Hardware Specification | No | The paper does not specify hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments.
Software Dependencies | No | The paper mentions a 'PyTorch implementation of EWC [Hataya, 2019]' used for comparison, but does not provide version numbers for its own software dependencies such as Python, PyTorch, or other libraries.
Experiment Setup | No | The paper states that models were 'trained via standard stochastic gradient descent' but does not report hyperparameters such as learning rate, batch size, number of epochs, or other training details. (A sketch of the unreported configuration follows the table.)
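
Regarding the Open Datasets row: the permuted MNIST benchmark is commonly built by applying one fixed random pixel permutation per task to the standard MNIST images. The snippet below is a minimal sketch of that construction, assuming torchvision is available; the `PermutedMNIST` wrapper and the seed-per-task convention are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import torch
from torchvision import datasets, transforms

class PermutedMNIST(torch.utils.data.Dataset):
    """MNIST with a fixed random pixel permutation (one permutation per task).

    Hypothetical helper for illustration; the paper does not document its
    exact permutation procedure.
    """

    def __init__(self, root="./data", train=True, seed=0):
        self.base = datasets.MNIST(root, train=train, download=True,
                                   transform=transforms.ToTensor())
        # A task is defined by a fixed permutation of the 28x28 pixels.
        rng = np.random.RandomState(seed)
        self.perm = torch.from_numpy(rng.permutation(28 * 28))

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, label = self.base[idx]
        flat = img.view(-1)[self.perm]   # apply this task's permutation
        return flat.view(1, 28, 28), label

# One dataset per sequential task, each with its own permutation seed.
tasks = [PermutedMNIST(seed=t) for t in range(5)]
```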
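
Regarding the Experiment Setup row: the kind of information the paper leaves unreported can be made concrete with a short training-configuration sketch. Every value and name below (learning rate, momentum, batch size, epoch count, the `train_task` helper) is a hypothetical placeholder chosen for illustration, not a setting reported by the authors.

```python
import torch

# Hypothetical hyperparameters: the paper only says models were trained via
# "standard stochastic gradient descent" and reports none of these values.
config = {
    "optimizer": "SGD",
    "learning_rate": 1e-2,   # not reported in the paper
    "momentum": 0.9,         # not reported in the paper
    "batch_size": 128,       # not reported in the paper
    "epochs_per_task": 10,   # not reported in the paper
}

def train_task(model, loader, cfg=config):
    """Plain SGD training loop for a single task (illustrative only)."""
    opt = torch.optim.SGD(model.parameters(),
                          lr=cfg["learning_rate"], momentum=cfg["momentum"])
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(cfg["epochs_per_task"]):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
```

A full reproduction would need these values (plus the network architecture and the replay-specific parameter λ) to be fixed explicitly, which is what the "No" result for this variable reflects.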