Less-Forgetful Learning for Domain Expansion in Deep Neural Networks
Authors: Heechul Jung, Jeongwoo Ju, Minju Jung, Junmo Kim
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we prove the effectiveness of our method through experiments on image classification tasks. All datasets used in the paper will be released on our website for someone's follow-up study. |
| Researcher Affiliation | Academia | Heechul Jung, School of EE, KAIST, heechul@kaist.ac.kr; Jeongwoo Ju, Division of Future Vehicle, KAIST, veryju@kaist.ac.kr; Minju Jung, School of EE, KAIST, alswn0925@kaist.ac.kr; Junmo Kim, School of EE, KAIST, junmo.kim@kaist.ac.kr |
| Pseudocode | Yes | Algorithm 1 Less-forgetful (LF) learning |
| Open Source Code | No | The paper states, "All datasets used in the paper will be released on our website for someone's follow-up study," but does not provide concrete access to the source code for the described methodology. |
| Open Datasets | Yes | We conducted two different experiments for image classification: one using datasets consisting of tiny images (CIFAR-10 (Krizhevsky and Hinton 2009), MNIST (LeCun et al. 1998), SVHN (Netzer et al. 2011)) and one using a dataset made up of large images (ImageNet (Russakovsky et al. 2015)). |
| Dataset Splits | No | The paper states 'Each dataset has both the following training and validation datasets: D^(o) = D_t^(o) ∪ D_v^(o), D_t^(o) ∩ D_v^(o) = ∅, D^(n) = D_t^(n) ∪ D_v^(n), and D_t^(n) ∩ D_v^(n) = ∅, where D_t^(·) and D_v^(·) are the training and validation datasets, respectively.' However, it does not provide specific percentages or sample counts for these validation splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. It only mentions the software framework used. |
| Software Dependencies | No | The paper states 'We used the Caffe framework for implementing our algorithm and baseline methods (Jia et al. 2014),' but does not specify a version number for Caffe or any other software dependencies with version numbers. |
| Experiment Setup | Yes | Table 4: Parameters used in experiments, by experiment type (Tiny old / Tiny new / Realistic old / Realistic new): mini-batch size 100 / 100 / 128 / 64; learning rate 0.01 / 0.0001 / 0.01 / 0.001; lr policy step / fix / step / fix; decay 0.1 / – / 0.1 / –; step size 20000 / – / 20000 / –; max iter 40000 / 10000 / 100000 / 1000; momentum 0.9 (all); weight decay 0.004 / 0.004 / 0.0005 / 0.0005. |
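The paper's Algorithm 1 (Less-forgetful learning) fine-tunes on the new domain while keeping the softmax classifier frozen and penalizing drift of the new network's features away from the frozen old network's features. A minimal NumPy sketch of that combined objective follows; the function name `lf_loss`, the default `lam` value, and the exact reduction (mean over the mini-batch) are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def lf_loss(feat_old, feat_new, logits_new, labels, lam=1.0):
    """Sketch of the less-forgetful objective: cross-entropy on
    new-domain labels plus an L2 term tying the new network's features
    to the frozen old network's features. `lam` is an assumed
    trade-off weight, not a value from the paper."""
    # numerically stable softmax cross-entropy on the new domain
    shifted = logits_new - logits_new.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # L2 feature-preservation term; feat_old comes from the frozen
    # source-domain network and would receive no gradient in training
    l2 = np.mean(np.sum((feat_new - feat_old) ** 2, axis=1))
    return ce + lam * l2
```

When the new features match the old ones exactly, the preservation term vanishes and the loss reduces to plain cross-entropy; as the features drift, the penalty grows, which is the mechanism that slows forgetting of the old domain.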