Cognitively Inspired Learning of Incremental Drifting Concepts
Authors: Mohammad Rostami, Aram Galstyan
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our method on two sequential task learning settings: incremental learning and continual incremental learning. |
| Researcher Affiliation | Academia | Mohammad Rostami, Aram Galstyan, University of Southern California, {mrostami,galstyan}@isi.edu |
| Pseudocode | Yes | Algorithm 1 ICLA (λ, γ, τ) |
| Open Source Code | Yes | Our implementation is available as a supplement. |
| Open Datasets | Yes | We design two incremental learning experiments using the MNIST and the Fashion-MNIST datasets. |
| Dataset Splits | No | The paper mentions using the "standard testing split" but does not provide explicit percentages or counts for training, validation, or test splits. Although MNIST and Fashion-MNIST have standard splits, the paper neither states them nor cites how they were used, which is needed for exact reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | Each task is learned in 100 epochs, and at each epoch the model performance is computed as the average classification rate over all classes observed so far. We use a memory buffer with a fixed size of 100 for MB. We build an autoencoder by expanding a VGG-based classifier by mirroring the layers. (A hedged sketch of this setup follows the table.) |
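
The "Experiment Setup" row describes a concrete configuration: per-task training for 100 epochs, a fixed-size memory buffer of 100 samples, and an autoencoder built by mirroring the layers of a VGG-based classifier. The sketch below illustrates how such a setup could look in PyTorch; the class names `MirroredVGGAutoencoder` and `ReplayBuffer`, the layer widths, the MNIST-shaped input, and the reservoir-sampling buffer policy are all assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of the described setup, NOT the paper's code:
# a small VGG-style classifier expanded into an autoencoder by mirroring
# its convolutional layers, plus a fixed-capacity replay buffer (size 100).
import random
import torch
import torch.nn as nn

class MirroredVGGAutoencoder(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # VGG-style encoder: stacked 3x3 convolutions with max pooling.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 28x28 -> 14x14
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 14x14 -> 7x7
        )
        # Classification head on the latent representation.
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 7 * 7, num_classes)
        )
        # Decoder mirrors the encoder to reconstruct the input.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),    # 7x7 -> 14x14
            nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid(),  # 14x14 -> 28x28
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

class ReplayBuffer:
    """Fixed-capacity memory buffer; reservoir sampling is an assumed policy."""
    def __init__(self, capacity: int = 100):
        self.capacity, self.seen, self.items = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, y))
        else:
            # Replace a random slot with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = (x, y)

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

# Smoke test on a dummy MNIST-shaped batch.
model = MirroredVGGAutoencoder()
x = torch.randn(4, 1, 28, 28)
logits, recon = model(x)
print(logits.shape, recon.shape)  # torch.Size([4, 10]) torch.Size([4, 1, 28, 28])
```

The mirrored decoder lets a single latent representation serve both classification and reconstruction, which is consistent with the paper's description of expanding a VGG-based classifier into an autoencoder; the exact depths, channel counts, and buffer update rule would need to be confirmed against the supplementary implementation.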