Learning Shared Knowledge for Deep Lifelong Learning using Deconvolutional Networks
Authors: Seungwon Lee, James Stokes, Eric Eaton
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two computer vision data sets show that the DF-CNN achieves superior performance in challenging lifelong learning settings, resists catastrophic forgetting, and exhibits reverse transfer to improve previously learned tasks from subsequent experience without retraining. |
| Researcher Affiliation | Collaboration | University of Pennsylvania, Philadelphia, PA, USA; Flatiron Institute, New York, NY, USA. Contact: leeswon@seas.upenn.edu, jstokes@flatironinstitute.org, eeaton@cis.upenn.edu |
| Pseudocode | Yes | The paper provides Algorithm 1 DF-CNN(λ, kbSize, transformSize); a hedged sketch of the filter-generation step the algorithm describes follows this table. |
| Open Source Code | No | The paper states that 'The online appendix is available on the third author's website at http://www.seas.upenn.edu/~eeaton/papers/Lee2019Learning.pdf', which is a PDF document, not source code for the methodology. |
| Open Datasets | Yes | We generated two lifelong learning problems using the CIFAR-100 [Krizhevsky and Hinton, 2009] and Office-Home [Venkateswara et al., 2017] data sets. |
| Dataset Splits | Yes | For CIFAR-100, we created a series of 10 image classification tasks... and split it into training and validation sets in the ratio 5.6:1 (170 training and 30 validation instances per task). ... For the Office-Home data set, we randomly split the data into training, validation, and test sets in a 60%/10%/30% ratio, yielding approximately 550 training, 90 validation, and 250 test instances per task (a split sketch follows this table). |
| Hardware Specification | No | The paper does not provide specific details on the hardware used, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | No | Details of the training process, the network architectures, and the hyperparameters used for each data set are described in Appendix B. Because these details appear only in an appendix, they are not presented in the main text of the paper. |
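
The signature of Algorithm 1, DF-CNN(λ, kbSize, transformSize), refers to the paper's core mechanism: each task's convolutional filters are generated from a compact shared knowledge base via a task-specific deconvolution (transposed convolution). The following is a minimal PyTorch sketch of that filter-generation step, assuming one intermediate nonlinearity and a 1×1 contraction; all tensor sizes, identifiers, and the exact layer composition are illustrative assumptions, not the paper's reported architecture, and the λ-weighted regularization term from Algorithm 1 is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DFConv2d(nn.Module):
    """One conv layer whose filters are generated from a shared knowledge
    base (KB) by a task-specific transposed convolution, in the spirit of
    DF-CNN's Algorithm 1. Sizes and names here are illustrative only."""

    def __init__(self, kb: nn.Parameter, c_in: int, c_out: int,
                 filt: int = 3, interm_ch: int = 16):
        super().__init__()
        self.kb = kb  # shared (1, kb_ch, kb_size, kb_size) tensor
        kb_ch, kb_size = kb.shape[1], kb.shape[2]
        # Task-specific deconvolution: expands the KB spatially from
        # kb_size x kb_size up to filt x filt (requires filt >= kb_size).
        self.deconv = nn.ConvTranspose2d(kb_ch, interm_ch,
                                         kernel_size=filt - kb_size + 1)
        # 1x1 contraction to the c_in * c_out filter channels.
        self.contract = nn.Conv2d(interm_ch, c_in * c_out, kernel_size=1)
        self.c_in, self.c_out, self.filt = c_in, c_out, filt

    def forward(self, x):
        # Generate this task's filters from the shared KB, then convolve.
        gen = self.contract(F.relu(self.deconv(self.kb)))
        w = gen.view(self.c_out, self.c_in, self.filt, self.filt)
        return F.conv2d(x, w, padding=self.filt // 2)

# Shared knowledge base (kbSize = 2, 8 channels; values chosen arbitrarily).
kb = nn.Parameter(torch.randn(1, 8, 2, 2))
layer = DFConv2d(kb, c_in=3, c_out=32)
print(layer(torch.randn(4, 3, 32, 32)).shape)  # torch.Size([4, 32, 32, 32])
```

Because the per-task weights live in the small deconv and contraction modules while the KB is shared, gradient updates from a new task flow into the KB, which is what enables the reverse transfer the paper reports.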
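The 60%/10%/30% Office-Home split reported above can be reproduced with two chained calls to a standard splitter; a minimal sketch follows, assuming scikit-learn and stand-in data (the instance count of 900 per task and the class count of 13 are assumptions chosen only to roughly match the reported ~550/90/250 sizes).

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data for one task; shapes and class count are hypothetical.
X = np.random.randn(900, 3 * 32 * 32)
y = np.random.randint(0, 13, size=900)

# First split off 60% for training.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.60, stratify=y, random_state=0)
# 10% of the total is 25% of the remaining 40%: validation vs. test.
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, train_size=0.25, stratify=y_rest, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 540 90 270, ~60/10/30
```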