Online Class-Incremental Continual Learning with Adversarial Shapley Value
Authors: Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, Jongseong Jang (pp. 9630-9638)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To test the efficacy of ASER and its variant ASERµ, we evaluate their performance by comparing them with several state-of-the-art CL baselines. We begin by reviewing the benchmark datasets, the baselines we compare against, and our experiment setting. We then report and analyze the results to validate our approach. |
| Researcher Affiliation | Collaboration | 1 University of Toronto, 2 LG AI Research |
| Pseudocode | Yes | Algorithm 1: Generic ER-based method (a hedged Python sketch of this training loop appears after the table) |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for their methodology or a link to a code repository. |
| Open Datasets | Yes | Split CIFAR-10 splits the CIFAR-10 dataset (Krizhevsky 2009), Split CIFAR-100 is constructed by splitting the CIFAR-100 dataset (Krizhevsky 2009), and Split miniImageNet consists of splitting the miniImageNet dataset (Vinyals et al. 2016) (a task-construction sketch appears after this table) |
| Dataset Splits | Yes | The detail of datasets, including the general information of each dataset, class composition and the number of samples in training, validation and test sets of each task is presented in Appendix D. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using an 'SGD optimizer' but does not specify any software names with version numbers for dependencies (e.g., specific deep learning frameworks, Python versions, or libraries). |
| Experiment Setup | Yes | We use a reduced ResNet18, similar to (Chaudhry et al. 2019b; Lopez-Paz and Ranzato 2017), as the base model for all datasets, and the network is trained via cross-entropy loss with an SGD optimizer and mini-batch size of 10. The size of the mini-batch retrieved from memory is also set to 10 irrespective of the size of the memory. More details of the experiment can be found in Appendix E. (A hedged configuration sketch also follows the table.) |
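
The Pseudocode row refers to the paper's Algorithm 1, a generic experience-replay (ER) loop. Below is a minimal sketch of such a loop, assuming PyTorch and a simple list-based buffer; the function name `er_train_step`, the reservoir-sampling update, and the random memory retrieval are illustrative stand-ins, not the paper's ASER procedure, which scores retrieval (and update) candidates with adversarial Shapley values instead.

```python
# Hypothetical sketch of a generic ER-based training step (not the paper's code).
# Random retrieval and reservoir-sampling update are simple stand-ins for ASER's
# Shapley-value-based memory retrieval and update.
import random
import torch
import torch.nn.functional as F

def er_train_step(model, optimizer, memory, x_batch, y_batch,
                  mem_batch_size=10, mem_capacity=1000, seen=0):
    """One online ER step: train on the incoming mini-batch joined with a
    mini-batch retrieved from memory, then update the memory buffer."""
    # Retrieve a mini-batch from memory (random here; ASER scores candidates).
    if len(memory) > 0:
        mem_samples = random.sample(memory, min(mem_batch_size, len(memory)))
        mem_x = torch.stack([x for x, _ in mem_samples])
        mem_y = torch.tensor([y for _, y in mem_samples])
        x_joint = torch.cat([x_batch, mem_x])
        y_joint = torch.cat([y_batch, mem_y])
    else:
        x_joint, y_joint = x_batch, y_batch

    # Single SGD update with cross-entropy loss, matching the stated setup.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_joint), y_joint)
    loss.backward()
    optimizer.step()

    # Memory update via reservoir sampling (one common choice; ASER modifies
    # this step as well). `seen` counts all stream examples processed so far.
    for x, y in zip(x_batch, y_batch):
        if len(memory) < mem_capacity:
            memory.append((x, int(y)))
        else:
            idx = random.randint(0, seen)
            if idx < mem_capacity:
                memory[idx] = (x, int(y))
        seen += 1
    return loss.item(), seen
```

The mini-batch size of 10 for both the incoming stream and the memory retrieval matches the setup stated in the paper; the buffer capacity and learning rate are additional choices not fixed by this excerpt.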
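
The Open Datasets row lists class-incremental splits of standard benchmarks. The snippet below is a minimal sketch of constructing Split CIFAR-10 (5 tasks of 2 classes each), assuming torchvision is available; the class ordering and the exact train/validation/test composition used in the paper are given in its Appendix D and may differ from this illustration.

```python
# Minimal sketch of building Split CIFAR-10 as disjoint class-incremental tasks.
from torch.utils.data import Subset
from torchvision import datasets, transforms

def build_split_cifar10(root="./data", classes_per_task=2, train=True):
    """Partition CIFAR-10 into 10 // classes_per_task disjoint tasks."""
    dataset = datasets.CIFAR10(root=root, train=train, download=True,
                               transform=transforms.ToTensor())
    num_tasks = 10 // classes_per_task
    tasks = []
    for t in range(num_tasks):
        task_classes = range(t * classes_per_task, (t + 1) * classes_per_task)
        indices = [i for i, label in enumerate(dataset.targets)
                   if label in task_classes]
        tasks.append(Subset(dataset, indices))
    return tasks
```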
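
For the Experiment Setup row, a minimal configuration sketch follows, assuming PyTorch; the standard torchvision `resnet18` stands in for the paper's reduced ResNet18 (Chaudhry et al. 2019b; Lopez-Paz and Ranzato 2017), and the learning rate shown is an assumed placeholder rather than a value reported in this excerpt.

```python
# Hedged configuration sketch of the stated experiment setup.
import torch
from torchvision.models import resnet18

# Stand-in for the paper's reduced ResNet18; the exact reduced architecture
# is not reproduced here.
model = resnet18(num_classes=10)

# Cross-entropy loss with an SGD optimizer, as stated; lr=0.1 is an assumption.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()

incoming_batch_size = 10   # mini-batch size stated in the paper
memory_batch_size = 10     # memory retrieval mini-batch size stated in the paper
```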