Class-Incremental Learning via Dual Augmentation
Authors: Fei Zhu, Zhen Cheng, Xu-yao Zhang, Cheng-lin Liu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform our experiments on CIFAR-100 [49] and Tiny-ImageNet [56]. Comparative results are shown in Figure 5. |
| Researcher Affiliation | Academia | Fei Zhu^{1,2}, Zhen Cheng^{1,2}, Xu-Yao Zhang^{1,2}, Cheng-Lin Liu^{1,2,3} — 1: NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; 2: University of Chinese Academy of Sciences, Beijing 100049, China; 3: Center for Excellence of Brain Science and Intelligence Technology, CAS |
| Pseudocode | Yes | Algorithm 1 presents the pseudo code of IL2A. |
| Open Source Code | Yes | Our code is available at https://github.com/Impression2805/IL2A. |
| Open Datasets | Yes | We perform our experiments on CIFAR-100 [49] and Tiny-ImageNet [56]. |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits (percentages, counts, or references to predefined splits) needed for reproducibility. |
| Hardware Specification | No | The paper does not specify the hardware used to run its experiments (GPU/CPU models, memory amounts, or other machine details). |
| Software Dependencies | No | The paper mentions using the Adam [57] optimizer but does not provide version numbers for its software dependencies (e.g., PyTorch, TensorFlow, or other Python libraries). |
| Experiment Setup | Yes | All models are trained using Adam [57] optimizer with an initial learning rate of 0.001 for 100 epochs with the mini-batch size of 64. The learning rate is reduced by a factor of 10 at 45 and 90 epochs. We use the same hyper-parameter value for all experiments. Specifically, we set α = 10 and β = 10 in Eq. (8). |
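
The reported hyper-parameters (Adam, initial learning rate 0.001, 100 epochs, mini-batch size 64, learning rate divided by 10 at epochs 45 and 90, α = β = 10) are enough to reconstruct the basic optimization loop. The sketch below is a minimal, hypothetical PyTorch setup of that configuration; the backbone (a torchvision `resnet18`), the CIFAR-100 data pipeline, and the way α and β would enter the loss are illustrative assumptions, not the authors' released IL2A code.

```python
# Minimal sketch of the reported training configuration.
# Backbone, data pipeline, and the role of alpha/beta are assumptions;
# see the authors' repository (https://github.com/Impression2805/IL2A)
# for the actual IL2A method.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([transforms.ToTensor()])
train_set = datasets.CIFAR100(root="./data", train=True, download=True,
                              transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(num_classes=100)                 # placeholder backbone
optimizer = optim.Adam(model.parameters(), lr=0.001)      # Adam, lr = 0.001
scheduler = optim.lr_scheduler.MultiStepLR(optimizer,     # lr / 10 at epochs 45, 90
                                           milestones=[45, 90], gamma=0.1)
criterion = nn.CrossEntropyLoss()

alpha, beta = 10.0, 10.0  # weights reported for Eq. (8); their use here is illustrative

for epoch in range(100):                                  # 100 epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        # The paper's full objective also includes alpha/beta-weighted
        # dual-augmentation terms; only the base classification loss is shown.
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```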