Progressive Voronoi Diagram Subdivision Enables Accurate Data-free Class-Incremental Learning

Authors: Chunwei Ma, Zhanghexuan Ji, Ziyun Huang, Yan Shen, Mingchen Gao, Jinhui Xu

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Putting everything together, iVoro achieves up to 25.26%, 37.09%, and 33.21% improvements on CIFAR-100, TinyImageNet, and ImageNet-Subset, respectively, compared to the state-of-the-art non-exemplar CIL approaches. In conclusion, iVoro enables highly accurate, privacy-preserving, and geometrically interpretable CIL that is particularly useful when cross-phase data sharing is forbidden, e.g., in medical applications.
Researcher Affiliation | Academia | Chunwei Ma¹, Zhanghexuan Ji¹, Ziyun Huang², Yan Shen¹, Mingchen Gao¹, Jinhui Xu¹. ¹ Department of Computer Science and Engineering, University at Buffalo; ² Computer Science and Software Engineering, Penn State Erie. ¹ {chunweim,zhanghex,yshen22,mgao8,jinhui}@buffalo.edu; ² {zxh201}@psu.edu
Pseudocode | Yes | Algorithm 1: Voronoi Diagram-based Logistic Regression. Algorithm 2: iVoro Algorithm. Algorithm 3: iVoro-D Algorithm.
Open Source Code | Yes | Our code is available at https://machunwei.github.io/ivoro/.
Open Datasets | Yes | Three standard CIL datasets are used for method evaluation: CIFAR-100 (Krizhevsky et al., 2009), TinyImageNet (Le & Yang, 2015), and ImageNet-Subset (Deng et al., 2009a).
Dataset Splits | Yes | We follow the popular benchmarking protocol in exemplar-free CIL used by (Liu et al., 2021b; Zhu et al., 2021; Douillard et al., 2020; Hou et al., 2019), in which the initial phase contains half of the classes while each subsequent phase contains 1/5, 1/10, or 1/20 of the remaining classes.
Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., specific GPU/CPU models, memory details).
Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | Specifically, for each phase τ ∈ {1, ..., t}, the local dataset Dτ is used to train a logistic regression model (restricted by Thm. 2.1) with weight decay β = 0.0001 and an initial learning rate of 0.001.
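The dataset-split protocol above (half of the classes in the initial phase, then a fixed fraction of the remainder per incremental phase) can be sketched as follows. This is an illustration of the split arithmetic only; the class ordering, shuffling, and seeding used by the paper are not specified in this excerpt, and `cil_phase_splits` is a hypothetical helper name.

```python
# Sketch of the exemplar-free CIL class split: the initial phase gets half of
# the classes, and each later phase gets `fraction` (e.g. 1/5, 1/10, or 1/20)
# of the remaining classes. Rounding to integers is an assumption here.
def cil_phase_splits(num_classes, fraction):
    """Return the number of new classes introduced in each phase."""
    base = num_classes // 2              # initial phase: half of the classes
    remaining = num_classes - base
    step = max(1, int(remaining * fraction))  # classes per incremental phase
    splits = [base]
    while sum(splits) < num_classes:
        splits.append(min(step, num_classes - sum(splits)))
    return splits

# CIFAR-100 with 1/10 of the remaining classes per phase:
print(cil_phase_splits(100, 1 / 10))  # [50, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
```

For TinyImageNet (200 classes) the same helper yields a 100-class base phase followed by 20 phases of 5 classes when `fraction = 1/20`.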
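As a rough sketch of the per-phase training step described in the setup row, the following fits a plain multinomial logistic regression with the stated weight decay (0.0001) and learning rate (0.001). This is not the paper's Thm. 2.1-restricted model: the epoch count, feature extraction, and gradient-descent variant are assumptions, and `train_phase_classifier` is a hypothetical name.

```python
import numpy as np

def train_phase_classifier(X, y, num_classes, epochs=100, lr=1e-3, wd=1e-4, seed=0):
    """Multinomial logistic regression on one phase's local dataset D_tau,
    trained by full-batch gradient descent with L2 weight decay."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], num_classes))
    b = np.zeros(num_classes)
    Y = np.eye(num_classes)[y]                        # one-hot targets
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)             # softmax probabilities
        grad_W = X.T @ (P - Y) / len(X) + wd * W      # weight decay term
        grad_b = (P - Y).mean(axis=0)
        W -= lr * grad_W
        b -= lr * grad_b
    return W, b
```

In the paper's setting one such classifier would be fit per phase τ on features of Dτ only, which is what makes the approach exemplar-free.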