FearNet: Brain-Inspired Model for Incremental Learning

Authors: Ronald Kemker, Christopher Kanan

ICLR 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | FearNet achieves state-of-the-art performance at incremental class learning on image (CIFAR-100, CUB-200) and audio (Audio Set) classification benchmarks. The paper's EXPERIMENTAL SETUP and EXPERIMENTAL RESULTS sections, Tables 1-6, and Figures 2, 4, 5, S1, and S2 provide empirical results and comparisons.
Researcher Affiliation | Academia | Ronald Kemker and Christopher Kanan, Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY 14623, USA ({rmk6217,kanan}@rit.edu)
Pseudocode | Yes | Pseudocode for FearNet's training and prediction algorithms is given in Algorithms 1 and 2, respectively (Appendix A.4, FEARNET ALGORITHM).
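
At prediction time, the paper's Algorithm 2 uses the BLA network to decide whether a query's class is stored in the hippocampal (recent) network or the mPFC (consolidated) network. A minimal sketch of that gating step, assuming `mpfc`, `hc`, and `bla` are callables; the function names and the 0.5 threshold are illustrative simplifications, not the paper's exact rule:

```python
import numpy as np

def predict(x, mpfc, hc, bla, threshold=0.5):
    """Hedged sketch of FearNet-style prediction (cf. the paper's Algorithm 2).

    mpfc(x) -> class probabilities from the consolidated (long-term) network
    hc(x)   -> class probabilities from the hippocampal (recent) network
    bla(x)  -> scalar in [0, 1]: estimated probability that x's class
               is stored in the hippocampal network
    All names and the thresholding scheme are reproduction assumptions;
    the paper resolves the two outputs via the BLA gate.
    """
    gate = bla(x)
    probs = hc(x) if gate >= threshold else mpfc(x)
    return int(np.argmax(probs))
```
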
Open Source Code | No | The paper states that 'FearNet was implemented in Tensorflow' but does not provide a repository link or an explicit statement about releasing the source code for the FearNet model or its methodology.
Open Datasets | Yes | We evaluate all of the models on three benchmark datasets (Table 1): CIFAR-100, CUB-200, and Audio Set. CIFAR-100 is a popular image classification dataset... CUB-200 is a fine-grained image classification dataset... (Welinder et al., 2010). Audio Set is an audio classification dataset (Gemmeke et al., 2017).
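
All three datasets are publicly available; CIFAR-100 in particular ships with TensorFlow/Keras, which makes it easy to confirm the standard 50,000-train / 10,000-test split that the paper's Table 1 draws on. The download call below is standard Keras API; its use here is just a verification sketch:

```python
import tensorflow as tf

# Standard CIFAR-100 split: 50,000 training and 10,000 test images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data()
print(x_train.shape, x_test.shape)  # (50000, 32, 32, 3) (10000, 32, 32, 3)
```
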
Dataset Splits | No | The paper specifies 'Train Samples' and 'Test Samples' in Table 1 for the datasets used (CIFAR-100, CUB-200, Audio Set) but does not provide explicit details about a separate validation split (e.g., percentages, counts, or specific predefined validation sets).
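
A reproduction therefore has to pick its own validation protocol. One common default, shown below purely as an assumed choice (the 10% fraction, the fixed seed, and the stratification are not from the paper):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in arrays; a real reproduction would use the actual training data.
x_train = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y_train = np.random.randint(0, 10, size=1000)

# Hold out 10% of the training data, stratified by class. These are
# reproduction choices; the paper specifies no validation split.
x_tr, x_val, y_tr, y_val = train_test_split(
    x_train, y_train, test_size=0.1, random_state=0, stratify=y_train)
```
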
Hardware Specification | No | The paper does not provide any specific details about the hardware used for the experiments, such as GPU models, CPU specifications, or cloud computing instance types.
Software Dependencies | No | The paper states 'FearNet was implemented in Tensorflow' and mentions using 'NAdam' for training. However, it does not specify version numbers for TensorFlow or any other software libraries, which reproducibility would require.
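
Anyone reproducing the work must pin an environment themselves. A minimal example is sketched below; every version here is an assumption (TensorFlow 1.x was current at ICLR 2018, but the paper names no versions):

```
# requirements.txt -- illustrative pins only; none are specified by the paper
tensorflow==1.4.1
numpy==1.13.3
```
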
Experiment Setup | Yes | A.1 MODEL HYPERPARAMETERS: 'Table S1 shows the training parameters for the FearNet model for each dataset.' Hyperparameter values: Learning Rate 2e-3; Mini-Batch Size 450 (Audio Set and CIFAR-100), 200 (CUB-200); mPFC Base-Knowledge Epochs 1,000; Memory Consolidation Epochs 60; BLA Training Epochs 20; Hidden Layer Size; Sleep Frequency 10; Dropout Rate 0.25; Unsupervised Loss Weights (λ) 10^4, 1.0, 0.1; Hidden Layer Activation: Exponential Linear Units.
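
Collected into a runnable configuration for reference. The dictionary keys and the optimizer wiring are reproduction scaffolding, not the authors' (unreleased) code; the values mirror Table S1 as quoted above:

```python
import tensorflow as tf

# FearNet training hyperparameters from the paper's Table S1. Key names
# are illustrative assumptions, not taken from the authors' code.
CONFIG = {
    "learning_rate": 2e-3,
    "batch_size": 450,                 # Audio Set & CIFAR-100; 200 for CUB-200
    "mpfc_base_knowledge_epochs": 1000,
    "memory_consolidation_epochs": 60,
    "bla_training_epochs": 20,
    "sleep_frequency": 10,             # how often consolidation ("sleep") runs
    "dropout_rate": 0.25,
    "unsup_loss_weights": (1e4, 1.0, 0.1),
    "activation": "elu",               # exponential linear units
}

# The paper trains with NAdam; in current TensorFlow this is Nadam.
optimizer = tf.keras.optimizers.Nadam(learning_rate=CONFIG["learning_rate"])
```
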