Cross-Class Feature Augmentation for Class Incremental Learning
Authors: Taehoon Kim, Jaeyoo Park, Bohyung Han
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the standard benchmarks show that our method consistently outperforms existing class incremental learning methods by significant margins in various scenarios, especially under an environment with an extremely limited memory budget. |
| Researcher Affiliation | Academia | Taehoon Kim¹, Jaeyoo Park¹, Bohyung Han¹,² — ¹Department of Electrical and Computer Engineering, Seoul National University; ²Interdisciplinary Program in Artificial Intelligence, Seoul National University. {kthone, bellos1203, bhhan}@snu.ac.kr |
| Pseudocode | No | The paper describes the steps of the method textually and with equations, but it does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper provides links to repositories for baseline models (PODNet, AANet, AFC) but does not provide a link or explicit statement about the availability of the authors' own source code for CCFA. |
| Open Datasets | Yes | We evaluate the proposed method on two datasets for class incremental learning, CIFAR-100 (Krizhevsky, Nair, and Hinton 2009), and ImageNet-100/1000 (Russakovsky et al. 2015). |
| Dataset Splits | No | The paper describes how classes are split across incremental stages and mentions training on Dk and evaluating on test data, but it does not provide specific train/validation/test split percentages or counts for the datasets. |
| Hardware Specification | Yes | On a single NVIDIA Titan Xp GPU, training with K = 1 runs at 5.15 iterations/sec, which is 10 times faster than the case with K = 3, when the ResNet-32 backbone is employed with a batch size of 128. ... On a single NVIDIA RTX GPU, PODNet with CCFA requires 1.53 seconds per iteration while PODNet requires 1.50 seconds, with batch size 128 under the ResNet-18 backbone. |
| Software Dependencies | No | The paper mentions backbone network architectures (ResNet-32, ResNet-18) but does not provide specific software dependencies with version numbers (e.g., Python version, library versions like PyTorch, TensorFlow, etc.). |
| Experiment Setup | Yes | The size of the memory buffer is set to 20 per class as a default for all experiments unless specified otherwise. For the feature augmentation, we set the number of iterations for adversarial attack to 10 for all experiments. ... In the CIFAR-100 experiments, we randomly sample the attack step size α from a uniform distribution U(2/255, 5/255) and generate 640 features... For ImageNet, the attack step size α is randomly sampled from the uniform distribution U(2/2040, 5/2040) and 128 features... |
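To make the Experiment Setup row concrete, below is a minimal PyTorch sketch of the kind of iterative, targeted adversarial attack the paper describes for feature augmentation: 10 attack steps with step size α drawn from U(2/255, 5/255) on CIFAR-100. Since the authors have not released their code, this is an illustration of the reported hyperparameters rather than their implementation; the function and variable names (`augment_features`, `backbone`, `classifier`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def augment_features(backbone, classifier, x, target_class,
                     num_steps=10, alpha_range=(2 / 255, 5 / 255)):
    """Hypothetical sketch of PGD-style cross-class feature augmentation.

    Perturbs input images so that the (frozen) backbone's features move
    toward a *different* target class, then returns the resulting features.
    Hyperparameters follow the paper's reported CIFAR-100 setup: 10 attack
    iterations, step size alpha sampled from U(2/255, 5/255). This is not
    the authors' implementation, which is not publicly available.
    """
    # Sample one step size per call, as described in the Experiment Setup row.
    alpha = torch.empty(1).uniform_(*alpha_range).item()
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        feats = backbone(x_adv)
        logits = classifier(feats)
        # Minimizing cross-entropy w.r.t. the target class (gradient descent
        # on the input) pushes the features toward that class's region.
        loss = F.cross_entropy(logits, target_class)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep perturbed images in valid range
    with torch.no_grad():
        return backbone(x_adv)  # augmented features for rehearsal
```

Under these assumptions, the returned features would serve as extra rehearsal samples for old classes; on ImageNet the same loop would simply use `alpha_range=(2/2040, 5/2040)` per the paper's quoted setup.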