ACIL: Analytic Class-Incremental Learning with Absolute Memorization and Privacy Protection
Authors: Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, Zhiping Lin
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed ACIL on CIFAR-100, ImageNet-Subset and ImageNet-Full datasets which are benchmark datasets for CIL. We compare the ACIL with several state-of-the-art CIL techniques... We tabulate the average incremental accuracy A and the forgetting rate F from the compared methods in Table 1. |
| Researcher Affiliation | Academia | Huiping Zhuang1, Zhenyu Weng2, Hongxin Wei3, Renchunzi Xie3, Kar-Ann Toh4, Zhiping Lin2 1Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, China 2School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 3School of Computer Science and Engineering, Nanyang Technological University, Singapore 4Department of Electrical and Electronic Engineering, Yonsei University, Korea |
| Pseudocode | Yes | Algorithm 1 ACIL |
| Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] Will release the code and instructions shortly. |
| Open Datasets | Yes | We evaluate the proposed ACIL on CIFAR-100, ImageNet-Subset and ImageNet-Full datasets which are benchmark datasets for CIL. |
| Dataset Splits | No | CIFAR-100 contains 100 classes of 32×32 color images with each class having 500 and 100 images for training and testing respectively. ImageNet-Full has 1000 classes, and 1.3 million images for training with 50,000 images for testing. |
| Hardware Specification | Yes | The results for the ACIL are measured by the average of 3 runs on an RTX 2080Ti GPU workstation. |
| Software Dependencies | No | The paper mentions software components like SGD, ReLU, ResNet-32, and ResNet-18, but does not provide specific version numbers for any software packages, libraries, or programming languages used. |
| Experiment Setup | Yes | For conventional BP training in the base training agenda, we train the network using SGD for 160 (90) epochs for ResNet-32 (ResNet-18). The learning rate starts at 0.1 and it is divided by 10 at epoch 80 (30) and 120 (60). We adopt a momentum of 0.9 and weight decay of 5×10⁻⁴ (1×10⁻⁴) with a batch size of 128. |
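
The hyperparameters quoted in the Experiment Setup row correspond to a standard SGD schedule with step decay. The following is a minimal PyTorch sketch of that base-phase configuration; the model, dataset, and training-loop wiring are illustrative assumptions, not the authors' implementation (their code had not yet been released at submission time).

```python
# Minimal sketch, assuming PyTorch: the base-phase hyperparameters quoted in the
# "Experiment Setup" row (SGD, lr 0.1 divided by 10 at the listed milestones,
# momentum 0.9, weight decay 5e-4 or 1e-4, batch size 128). The model and
# dataset objects are placeholders supplied by the caller.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_base_phase(model: nn.Module, train_set, *, epochs=160,
                     milestones=(80, 120), weight_decay=5e-4, device="cuda"):
    """Base training agenda. Defaults match the ResNet-32/CIFAR-100 setting;
    for ResNet-18/ImageNet the quoted values are epochs=90,
    milestones=(30, 60), weight_decay=1e-4."""
    loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=weight_decay)
    # "divided by 10 at epoch 80 (30) and 120 (60)"
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=list(milestones), gamma=0.1)
    criterion = nn.CrossEntropyLoss()

    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```

This covers only the conventional BP base training described in the paper; the subsequent analytic class-incremental phases (Algorithm 1 ACIL) are not reproduced here.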