Optimizing Spca-based Continual Learning: A Theoretical Approach

Authors: Chunchun Yang, Malik Tiomoko, Zengfu Wang

ICLR 2023

Reproducibility assessment (each variable is listed with its result and the supporting evidence from the paper):
Research Type: Experimental
Evidence: "Experimental results confirm that the various theoretical conclusions are robust to a wide range of data distributions. Besides, several applications on synthetic and real data show that the proposed method, while being computationally efficient, achieves results comparable to some state-of-the-art methods."
Researcher Affiliation: Collaboration
Evidence: Chunchun Yang (University of Science and Technology of China; Huawei Noah's Ark Lab; yangchunchun4@huawei.com), Malik Tiomoko (Huawei Noah's Ark Lab, Paris, France; malik.tiomoko@huawei.com), Zengfu Wang (University of Science and Technology of China; Hefei Institutes of Physical Science, Chinese Academy of Sciences, China; zfwang@ustc.edu.cn).
Pseudocode: Yes
Evidence: "Our algorithm (one-versus-one) is summarized as Algorithm 1 in the Appendix and the code is available online."
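For orientation, here is a minimal, hypothetical sketch of the generic supervised-PCA (SPCA) projection that such methods build on, in the style of Barshan et al.'s supervised PCA. It is not the authors' one-versus-one Algorithm 1 (see the paper's Appendix and the released code for that), and the function name and parameters are illustrative assumptions.

```python
# Generic supervised-PCA projection sketch; NOT the paper's Algorithm 1.
import numpy as np

def spca_projection(X, y, k):
    """X: (n, d) features; y: (n,) integer class labels; returns a (d, k) projection."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    Y = np.eye(int(y.max()) + 1)[y]        # one-hot labels, shape (n, c)
    M = X.T @ H @ (Y @ Y.T) @ H @ X        # label-aligned scatter matrix, shape (d, d)
    _, eigvecs = np.linalg.eigh(M)         # eigenvalues returned in ascending order
    return eigvecs[:, ::-1][:, :k]         # top-k directions

# Usage: Z = X @ spca_projection(X, y, k=10) gives a k-dimensional supervised embedding.
```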
Open Source Code: Yes
Evidence: Code is released at https://github.com/huawei-noah/noah-research/tree/master/OSCL and https://gitee.com/mindspore/models/tree/master/research/AI-foundation/OSCL
Open Datasets: Yes
Evidence: "Throughout the experimental part, we use 5 datasets (Synthetic; Permuted MNIST, denoted PMNIST; Split MNIST, denoted SMNIST; Rotated MNIST, denoted RMNIST; Split Fashion MNIST, denoted SFMNIST). See more details in Appendix D."
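The MNIST variants above follow standard continual-learning constructions. As one illustration, the sketch below shows the usual way Permuted-MNIST-style tasks are generated (one fixed random pixel permutation per task); the helper name make_permuted_tasks and the seeding scheme are assumptions, not the paper's exact protocol.

```python
# Standard Permuted-MNIST task construction sketch (not the paper's exact protocol).
import numpy as np

def make_permuted_tasks(X, y, n_tasks, seed=0):
    """X: (n, d) flattened images; y: (n,) labels; returns a list of (X_perm, y) tasks."""
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(n_tasks):
        perm = rng.permutation(X.shape[1])  # one fixed pixel permutation per task
        tasks.append((X[:, perm], y))
    return tasks
```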
Dataset Splits: No
Evidence: The paper mentions training and testing splits but does not explicitly specify a validation split or a methodology such as cross-validation for hyperparameter tuning.
Hardware Specification: No
Evidence: The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies: No
Evidence: The paper mentions the 'Scipy optimization toolbox' and a 'CNN model' but does not specify version numbers for these or any other software libraries, which are necessary for full reproducibility.
Experiment Setup: Yes
Evidence: "Across all MNIST variants, 1000 samples were used as training data. For Rotated MNIST and Permuted MNIST, 10 tasks were generated; for Split MNIST and Split Fashion MNIST, 5 tasks were generated... Each experiment was run five times to obtain the results. For a fair comparison, given that we are using a linear model, we used HOG (Dalal & Triggs, 2005) to extract features for Rotated MNIST, Split MNIST, and Split Fashion MNIST, and raw data for Permuted MNIST (without any feature extraction)."
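The HOG feature-extraction step can be reproduced with off-the-shelf tooling. A minimal sketch using scikit-image's hog is given below; the orientation, cell, and block parameters are assumptions, since the paper does not state its exact HOG configuration.

```python
# HOG feature extraction sketch for 28x28 MNIST-style images (parameters assumed).
import numpy as np
from skimage.feature import hog

def extract_hog(images):
    """images: (n, 28, 28) array; returns an (n, f) HOG feature matrix."""
    return np.stack([
        hog(img, orientations=9, pixels_per_cell=(7, 7), cells_per_block=(2, 2))
        for img in images
    ])
```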