Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Cognitively Inspired Learning of Incremental Drifting Concepts
Authors: Mohammad Rostami, Aram Galstyan
IJCAI 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our method on two sequential task learning settings: incremental learning and continual incremental learning. |
| Researcher Affiliation | Academia | Mohammad Rostami, Aram Galstyan, University of Southern California |
| Pseudocode | Yes | Algorithm 1 ICLA (λ, γ, τ) |
| Open Source Code | Yes | Our implementation is available as a supplement. |
| Open Datasets | Yes | We design two incremental learning experiments using the MNIST and the Fashion-MNIST datasets. |
| Dataset Splits | No | The paper mentions using the "standard testing split" but does not state specific percentages or counts for the training, validation, or test splits. Although MNIST and Fashion-MNIST have well-known standard splits, the paper neither states them nor cites how they were used, which hinders exact reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | Each task is learned in 100 epochs, and at each epoch the model performance is computed as the average classification rate over all previously observed classes. We use a memory buffer with a fixed size of 100 for MB. We build an autoencoder by expanding a VGG-based classifier by mirroring the layers. |
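The setup row above quotes the paper's description of building an autoencoder by "mirroring the layers" of a VGG-based classifier. The sketch below illustrates one plausible reading of that construction: reversing the encoder's layer list and swapping each operation for its inverse to obtain a decoder. The layer-spec format and the `mirror_layers` helper are hypothetical illustrations, not the authors' implementation (which is distributed as a supplement).

```python
# Hypothetical sketch of "mirroring the layers" of a VGG-style encoder
# to form the decoder half of an autoencoder. The spec format and the
# op-inverse mapping are illustrative assumptions, not the paper's code.

def mirror_layers(encoder_specs):
    """Reverse the encoder's layer order and swap each op for its inverse."""
    inverse = {"conv": "deconv", "pool": "unpool", "relu": "relu"}
    decoder = []
    for spec in reversed(encoder_specs):
        op, *args = spec
        if op == "conv":
            in_ch, out_ch = args
            # A transposed conv runs the channel change in reverse.
            decoder.append((inverse[op], out_ch, in_ch))
        else:
            decoder.append((inverse[op],))
    return decoder

# A tiny VGG-like block: two conv layers, each followed by ReLU, then pooling.
encoder = [("conv", 1, 64), ("relu",), ("conv", 64, 128), ("relu",), ("pool",)]
decoder = mirror_layers(encoder)
print(decoder)
# [('unpool',), ('relu',), ('deconv', 128, 64), ('relu',), ('deconv', 64, 1)]
```

In a real implementation the specs would map to framework layers (e.g. `Conv2d`/`ConvTranspose2d` in PyTorch), with the mirrored decoder appended after the encoder so the combined network reconstructs its input.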