Isolation and Impartial Aggregation: A Paradigm of Incremental Learning without Interference

Authors: Yabin Wang, Zhiheng Ma, Zhiwu Huang, Yaowei Wang, Zhou Su, Xiaopeng Hong

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed method on four large datasets. Extensive results demonstrate the superiority of the proposed method in setting up new state-of-the-art overall performance.
Researcher Affiliation | Academia | 1 School of Cyber Science and Engineering, Xi'an Jiaotong University; 2 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences; 3 Singapore Management University; 4 University of Southampton; 5 Peng Cheng Laboratory; 6 Harbin Institute of Technology
Pseudocode | Yes | Algorithm 1 (Model Training) and Algorithm 2 (Inference) are included in the paper.
Open Source Code | Yes | Code is available at https://github.com/iamwangyabin/ESN.
Open Datasets | Yes | We evaluate the proposed method on four large datasets. Split-DomainNet: we build the cross-domain class-incremental learning benchmark, Split-DomainNet, based on DomainNet (Peng et al. 2019). Split-CIFAR100 (Wang et al. 2022c) splits the original CIFAR-100 (Krizhevsky and Hinton 2009). 5-Datasets (Ebrahimi et al. 2020) provides a benchmark for class-incremental learning. CORe50 (Lomonaco and Maltoni 2017) is a large benchmark dataset for continual object recognition.
Dataset Splits | No | The paper does not explicitly state training/validation/test splits with percentages or counts for most datasets. For CORe50, it describes a test set but no validation split: "Three domains (3, 7, and 10) are selected as test set, and the remaining 8 domains are used for incremental learning." (A minimal sketch of this split is given after the table.)
Hardware Specification | Yes | We implement our method in PyTorch with two NVIDIA RTX 3090 GPUs.
Software Dependencies | No | We implement our method in PyTorch. The version of PyTorch or any other software dependency is not specified.
Experiment Setup | Yes | We use the SGD optimizer and the cosine annealing learning rate scheduler with an initial learning rate of 0.01 for all benchmarks. We use 30 epochs for Split-CIFAR100 and Split-DomainNet, and 10 epochs for 5-Datasets and CORe50. We set the batch size to 128 for all experiments. Momentum and weight decay are set to 0.9 and 0.0005, respectively. The candidate temperature set Ψ ranges from 0.001 to 1.0 in steps of 0.001. We set the energy anchor to 10 and the balance hyper-parameter λ = 0.1 for all benchmarks.
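
For concreteness, the optimizer and scheduler settings quoted in the Experiment Setup row can be written as a short PyTorch sketch. This is a minimal illustration under stated assumptions: the placeholder classifier head, the per-task training loop, and the handling of the loss terms (energy anchor, balance weight λ) are stand-ins, not reproductions of the authors' released code.

```python
# Minimal sketch of the reported training configuration, assuming a generic
# PyTorch model and a per-task dataloader. The placeholder head and the
# omitted loss terms (energy anchor, balance weight lambda) are assumptions;
# see https://github.com/iamwangyabin/ESN for the authors' implementation.
import torch

BATCH_SIZE = 128          # all experiments
INIT_LR = 0.01            # initial learning rate, all benchmarks
MOMENTUM = 0.9
WEIGHT_DECAY = 0.0005
EPOCHS = 30               # Split-CIFAR100 / Split-DomainNet; 10 for 5-Datasets / CORe50
ENERGY_ANCHOR = 10.0      # reported energy anchor
LAMBDA_BALANCE = 0.1      # reported balance hyper-parameter

# Candidate temperature set Psi: 0.001, 0.002, ..., 1.000
PSI = [round(0.001 * i, 3) for i in range(1, 1001)]

model = torch.nn.Linear(768, 100)  # placeholder classifier head, for illustration only

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=INIT_LR,
    momentum=MOMENTUM,
    weight_decay=WEIGHT_DECAY,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    # ... one pass over the current task's dataloader with batches of 128 ...
    scheduler.step()
```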
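
Similarly, the CORe50 protocol quoted in the Dataset Splits row reduces to holding out three acquisition sessions for testing. The sketch below assumes the standard CORe50 numbering of 11 sessions; the actual image loading is left to the benchmark code.

```python
# Hedged sketch of the CORe50 split quoted above: sessions ("domains") 3, 7,
# and 10 are held out as the test set, and the remaining 8 sessions are used
# for incremental learning. Session numbering (1-11) follows the standard
# CORe50 convention; data loading is not shown here.
ALL_SESSIONS = list(range(1, 12))
TEST_SESSIONS = {3, 7, 10}
TRAIN_SESSIONS = [s for s in ALL_SESSIONS if s not in TEST_SESSIONS]

assert len(TRAIN_SESSIONS) == 8  # matches "the remaining 8 domains"
```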