CPR: Classifier-Projection Regularization for Continual Learning
Authors: Sungmin Cha, Hsiang Hsu, Taebaek Hwang, Flavio Calmon, Taesup Moon
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our extensive experimental results, we apply CPR to several state-of-the-art regularization-based continual learning methods and benchmark performance on popular image recognition datasets. |
| Researcher Affiliation | Academia | Sungmin Cha1, Hsiang Hsu2, Taebaek Hwang1, Flavio P. Calmon2, and Taesup Moon3 1Sungkyunkwan University 2Harvard University 3Seoul National University csm9493@skku.edu, hsianghsu@g.harvard.edu, gxq9106@gmail.com, fcalmon@g.harvard.edu, tsmoon@snu.ac.kr |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The codes and scripts for this work are available at https://github.com/csm9493/CPR_CL. |
| Open Datasets | Yes | We select CIFAR-100, CIFAR-10/100 (Krizhevsky et al., 2009), Omniglot (Lake et al., 2015), and CUB200 (Welinder et al., 2010) as benchmark datasets. |
| Dataset Splits | No | The paper mentions using 'validation sets' for hyperparameter tuning ('we selected β using validation sets'), but it does not provide specific details on the dataset splits (e.g., percentages, sample counts, or explicit split files) within the main text. It defers 'Training details' to the Supplementary Material. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software tools and frameworks (e.g., 'PPO', 'PyHessian') but does not specify their version numbers or other specific ancillary software details necessary for replication in the main text. It defers 'Training details' to the Supplementary Material. |
| Experiment Setup | No | The paper states that 'Training details, model architectures, hyperparameters tuning, and source codes are available in the Supplementary Material (SM),' indicating that specific experimental setup details like hyperparameter values are not provided in the main text. |
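
The method assessed above adds a classifier-projection regularizer on top of an existing regularization-based continual learning objective. The sketch below is not the authors' released code (that is available at the GitHub link in the table); it is a minimal illustration, assuming the CPR term takes the form of a KL divergence between the classifier's softmax output and the uniform distribution, with an illustrative coefficient `beta` and a placeholder `reg_loss` standing in for the base method's penalty (e.g., EWC or MAS).

```python
import math

import torch
import torch.nn.functional as F


def cpr_term(logits: torch.Tensor) -> torch.Tensor:
    """KL(softmax(logits) || uniform), averaged over the batch.

    Up to the constant log(C), this equals the negative entropy of the
    classifier output, so minimizing it pushes predictions toward uniform.
    This is an assumed form of the regularizer, not code from the paper.
    """
    num_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    kl = (probs * (log_probs + math.log(num_classes))).sum(dim=1)
    return kl.mean()


def total_loss(logits, targets, reg_loss, beta=0.5, lam=1.0):
    """Cross-entropy + base continual-learning penalty + beta * CPR-style term.

    `reg_loss`, `beta`, and `lam` are illustrative placeholders, not the
    paper's tuned hyperparameters (those are deferred to its supplementary
    material).
    """
    return F.cross_entropy(logits, targets) + lam * reg_loss + beta * cpr_term(logits)
```

Under these assumptions, combining CPR with a given regularization-based method amounts to adding `beta * cpr_term(logits)` to that method's existing training objective.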