Fine-Grained Knowledge Selection and Restoration for Non-exemplar Class Incremental Learning
Authors: Jiang-Tian Zhai, Xialei Liu, Lu Yu, Ming-Ming Cheng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on CIFAR100, TinyImageNet and ImageNet-Subset demonstrate the effectiveness of our method. |
| Researcher Affiliation | Academia | Jiang-Tian Zhai¹, Xialei Liu¹*, Lu Yu², Ming-Ming Cheng¹ (¹VCIP, CS, Nankai University; ²Tianjin University of Technology) |
| Pseudocode | Yes | Algorithm 1: Pseudocode of Training Process |
| Open Source Code | Yes | Code is available at https://github.com/scok30/vit-cil. |
| Open Datasets | Yes | We conduct experiments on three datasets: CIFAR100, TinyImageNet, and ImageNet-Subset, as commonly used in previous works. |
| Dataset Splits | Yes | For CIFAR100 and ImageNet-Subset, we adopt three configurations: 50 + 5 × 10, 50 + 10 × 5, and 40 + 20 × 3. For TinyImageNet, the settings are: 100 + 5 × 20, 100 + 10 × 10, and 100 + 20 × 5. (Read as base classes + classes per task × number of incremental tasks; a split sketch follows the table.) |
| Hardware Specification | No | The paper mentions "Computation is supported by the Supercomputing Center of Nankai University." but does not provide specific details such as GPU/CPU models, memory, or other hardware specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) that would be needed to replicate the experiments. |
| Experiment Setup | Yes | As for the structure of the vision transformer, we use 5 transformer blocks for the encoder and 1 for the decoder, which is much more lightweight than the original version of ViT-Base. All transformer blocks have an embedding dimension of 384 and 12 self-attention heads. We train each task for 400 epochs. After task t, we save one averaged prototype (class center) for each class. We set λpks and λpr to 10 in experiments. (A configuration sketch follows the table.) |
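
To make the split notation concrete, here is a minimal sketch of how such class-incremental task splits can be built. It assumes integer class labels 0..N-1; the function name `make_task_splits` and the fixed-seed shuffle are illustrative assumptions, not the authors' code.

```python
import numpy as np

def make_task_splits(num_classes: int, base: int, per_task: int, seed: int = 0):
    """Partition class IDs into one base task plus equal-sized incremental tasks.

    E.g., base=50, per_task=5 on 100 classes yields the 50 + 5 x 10 setting:
    one base task of 50 classes followed by 10 tasks of 5 classes each.
    """
    rng = np.random.default_rng(seed)
    classes = rng.permutation(num_classes)   # fixed class order for a given seed
    tasks = [classes[:base].tolist()]        # task 0: the base classes
    for start in range(base, num_classes, per_task):
        tasks.append(classes[start:start + per_task].tolist())
    return tasks

# CIFAR100, 50 + 5 x 10: 1 base task + 10 incremental tasks = 11 tasks total.
splits = make_task_splits(num_classes=100, base=50, per_task=5)
assert len(splits) == 11 and len(splits[0]) == 50 and len(splits[1]) == 5
```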
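
And here is a rough sketch of a transformer stack matching the reported sizes (5 encoder blocks, 1 decoder block, embedding dimension 384, 12 self-attention heads). It uses PyTorch's stock `nn.TransformerEncoderLayer` as a stand-in; the MLP ratio, pre-norm placement, and token count are assumptions, and the authors' actual blocks are in the released repository at https://github.com/scok30/vit-cil.

```python
import torch
import torch.nn as nn

EMBED_DIM, NUM_HEADS = 384, 12  # sizes reported in the experiment setup

def make_blocks(depth: int) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(
        d_model=EMBED_DIM, nhead=NUM_HEADS,
        dim_feedforward=4 * EMBED_DIM,      # assumed MLP ratio of 4
        batch_first=True, norm_first=True,  # pre-norm, as in standard ViT blocks
    )
    return nn.TransformerEncoder(layer, num_layers=depth)

encoder = make_blocks(depth=5)  # 5-block feature extractor
decoder = make_blocks(depth=1)  # single lightweight decoder block

tokens = torch.randn(2, 65, EMBED_DIM)  # e.g., 64 patch tokens + 1 class token
features = encoder(tokens)
restored = decoder(features)
assert restored.shape == tokens.shape
```

With only 6 blocks at dimension 384, this stack is far smaller than ViT-Base (12 blocks at dimension 768), which is consistent with the paper's "much more lightweight" characterization.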