Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Less-Forgetful Learning for Domain Expansion in Deep Neural Networks

Authors: Heechul Jung, Jeongwoo Ju, Minju Jung, Junmo Kim

AAAI 2018 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Finally, we prove the effectiveness of our method through experiments on image classification tasks. All datasets used in the paper will be released on our website for someone's follow-up study.
Researcher Affiliation Academia Heechul Jung, School of EE, KAIST; Jeongwoo Ju, Division of Future Vehicle, KAIST; Minju Jung, School of EE, KAIST; Junmo Kim, School of EE, KAIST
Pseudocode Yes Algorithm 1 Less-forgetful (LF) learning
Open Source Code No The paper states, "All datasets used in the paper will be released on our website for someone's follow-up study," but does not provide concrete access to the source code for the described methodology.
Open Datasets Yes We conducted two different experiments for image classification: one using datasets consisting of tiny images (CIFAR-10 (Krizhevsky and Hinton 2009), MNIST (LeCun et al. 1998), SVHN (Netzer et al. 2011)) and one using a dataset made up of large images (ImageNet (Russakovsky et al. 2015)).
Dataset Splits No The paper states 'Each dataset has both the following training and validation datasets: D(o) = D(o)_t ∪ D(o)_v, D(o)_t ∩ D(o)_v = ∅, D(n) = D(n)_t ∪ D(n)_v, and D(n)_t ∩ D(n)_v = ∅, where D(·)_t and D(·)_v are the training and validation datasets, respectively.' However, it does not provide specific percentages or sample counts for these validation splits.
Hardware Specification No The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory capacity; it only mentions the software framework used.
Software Dependencies No The paper states 'We used the Caffe framework for implementing our algorithm and baseline methods (Jia et al. 2014),' but does not specify a version number for Caffe or any other software dependencies with version numbers.
Experiment Setup Yes Table 4: Parameters used in experiments.

Exp. type         Tiny    Tiny     Realistic  Realistic
Domain type       Old     New      Old        New
mini-batch size   100     100      128        64
learning rate     0.01    0.0001   0.01       0.001
lr policy         step    fix      step       fix
decay             0.1     -        0.1        -
step size         20000   -        20000      -
max iter          40000   10000    100000     1000
momentum          0.9     0.9      0.9        0.9
weight decay      0.9     0.9      0.9        0.9
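Since the paper reports using the Caffe framework, the tiny-image, old-domain column of Table 4 corresponds to a Caffe solver configuration along these lines (a hedged sketch: the file name and net path are hypothetical, while the numeric values are taken from the table above; the mini-batch size is set in the net's data layer rather than the solver):

```
# solver.prototxt (hypothetical file name) — tiny-image, old-domain settings from Table 4
net: "train_val.prototxt"   # hypothetical path to the net definition
base_lr: 0.01               # learning rate
lr_policy: "step"
gamma: 0.1                  # decay factor applied at each step
stepsize: 20000
max_iter: 40000
momentum: 0.9
weight_decay: 0.004
```

The new-domain columns would differ mainly in a smaller base_lr (e.g. 0.0001 for tiny images) and a "fixed" lr_policy with no gamma/stepsize.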