Navigating Memory Construction by Global Pseudo-Task Simulation for Continual Learning

Authors: Yejia Liu, Wang Zhu, Shaolei Ren

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on four widely used vision benchmarks. Our results have shown that the GPS achieves higher accuracy compared to baselines, especially when we have a long task sequence. In addition, our empirical analysis verifies that the dynamic memory construction by using GPS is close to the offline solution.
Researcher Affiliation | Academia | Yejia Liu, Wang Zhu, Shaolei Ren; University of California Riverside; University of Southern California; {yliu807, shaolei}@ucr.edu; wangzhu@usc.edu
Pseudocode | Yes | The detailed binary search algorithm for sj can be found in our Appendix A. We also put the detailed algorithm for GPS in Appendix A.
Open Source Code | Yes | We release our code at https://github.com/liuyejia/gps_cl
Open Datasets | Yes | We carry out evaluations on four widely used vision benchmarks in continual learning, P-MNIST, S-CIFAR-10, S-CIFAR-100 and Tiny Image Net [43, 6, 25].
Dataset Splits | No | The paper describes the datasets used (e.g., S-CIFAR-10, S-CIFAR-100) and how they are divided into sequential tasks, but it does not provide explicit train/validation/test split percentages, absolute sample counts, or predefined split files for reproducibility, either per task or overall (a task-split sketch is given below the table for reference).
Hardware Specification | No | The paper does not specify the type of GPUs, CPUs, or other hardware used for running the experiments. It only mentions the model architectures (e.g., 'Resnet18') and training parameters.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1'). It mentions general components like 'stochastic gradient descent (SGD)' but no detailed software environment for reproducibility.
Experiment Setup | Yes | Our training all uses stochastic gradient descent (SGD) with a learning rate of 0.1. We use λ = 1 in the local updating method Eqn. (2). For P-MNIST, we train 5 epochs for each task, while increasing the number of epochs to 50 for S-CIFAR-10 and 100 for both S-CIFAR-100 and Tiny Image Net, given their data complexity, as done by works [6, 43]. For P-MNIST and S-CIFAR-10, we set the batch size as 10. For S-CIFAR-100 and Tiny Image Net, the batch size is set to 50. (A configuration sketch follows below.)
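
To make the dataset and split rows above more concrete, here is a minimal sketch of how a sequential benchmark such as S-CIFAR-10 is commonly constructed for continual learning. The 5-task, 2-classes-per-task split, the torchvision loading path, and the helper name make_task_loaders are assumptions for illustration, not details stated in the excerpt; only the batch size of 10 comes from the quoted experiment setup.

```python
# Minimal sketch: building sequential class-incremental tasks from CIFAR-10.
# Assumptions (not from the excerpt): 5 tasks of 2 classes each, torchvision loading.
import torch
from torch.utils.data import Subset, DataLoader
from torchvision import datasets, transforms

NUM_TASKS = 5          # assumed: 10 classes split into 5 two-class tasks
CLASSES_PER_TASK = 2

transform = transforms.ToTensor()
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

def make_task_loaders(dataset, batch_size, shuffle=True):
    """Return one DataLoader per task, each restricted to that task's class labels."""
    targets = torch.tensor(dataset.targets)
    loaders = []
    for t in range(NUM_TASKS):
        task_classes = range(t * CLASSES_PER_TASK, (t + 1) * CLASSES_PER_TASK)
        mask = torch.zeros_like(targets, dtype=torch.bool)
        for c in task_classes:
            mask |= targets == c
        idx = torch.nonzero(mask).flatten().tolist()
        loaders.append(DataLoader(Subset(dataset, idx), batch_size=batch_size, shuffle=shuffle))
    return loaders

# Batch size 10 for S-CIFAR-10, as quoted in the experiment-setup row.
train_loaders = make_task_loaders(train_set, batch_size=10)
test_loaders = make_task_loaders(test_set, batch_size=10, shuffle=False)
```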
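
The experiment-setup row pins down the optimizer, learning rate, epochs, and batch sizes. Below is a minimal sketch of that configuration in PyTorch; the ResNet-18 backbone is mentioned in the paper, but the torchvision constructor, the output-head size, and the bare per-task training loop are illustrative assumptions, and the paper's GPS memory-construction step is omitted entirely.

```python
# Minimal sketch of the quoted training setup: SGD with learning rate 0.1,
# per-benchmark epochs and batch sizes, and lambda = 1 for the local update (Eqn. 2).
# The ResNet-18 backbone and head size are illustrative assumptions.
import torch
from torchvision.models import resnet18

CONFIG = {                                   # values quoted in the experiment-setup row
    "P-MNIST":       {"epochs": 5,   "batch_size": 10},
    "S-CIFAR-10":    {"epochs": 50,  "batch_size": 10},
    "S-CIFAR-100":   {"epochs": 100, "batch_size": 50},
    "Tiny ImageNet": {"epochs": 100, "batch_size": 50},
}
LR = 0.1        # SGD learning rate, as quoted
LAMBDA = 1.0    # lambda in the local updating method, Eqn. (2) of the paper

model = resnet18(num_classes=10)             # assumed head size for S-CIFAR-10
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
criterion = torch.nn.CrossEntropyLoss()

def train_one_task(model, loader, epochs):
    """Plain per-task SGD loop; the GPS memory-construction step itself is omitted."""
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

# Usage: for each task t in the sequence, call
#   train_one_task(model, train_loaders[t], CONFIG["S-CIFAR-10"]["epochs"])
```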