Continual Learning with Deep Generative Replay
Authors: Hanul Shin, Jung Kwon Lee, Jaehong Kim, Jiwon Kim
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our methods in several sequential learning settings involving image classification tasks. |
| Researcher Affiliation | Collaboration | Hanul Shin (Massachusetts Institute of Technology; SK T-Brain, skyshin@mit.edu); Jung Kwon Lee, Jaehong Kim, Jiwon Kim (SK T-Brain, {jklee,xhark,jk}@sktbrain.com) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | Yes | The paper uses publicly available datasets: "We tested our model on classifying MNIST handwritten digit database [19]" and "sequentially trained our model on classifying MNIST and Street View House Number (SVHN) dataset [25]". |
| Dataset Splits | No | The paper mentions 'test data' and 'training' but does not explicitly describe training/test/validation dataset splits or mention a specific validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types) used for running its experiments. |
| Software Dependencies | No | The paper mentions techniques such as WGAN-GP and the GAN framework but does not list specific software dependencies with version numbers (e.g., library names with versions). |
| Experiment Setup | No | The paper describes general training procedures and concepts like 'learning rates' and 'fine-tuning' but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings. |
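Since the paper provides no pseudocode or source code, the following is a minimal, hypothetical sketch of the core idea behind deep generative replay: when training on a new task, each batch mixes real new-task data with samples drawn from a generator trained on previous tasks, labeled by the previous solver. All names (`mix_replay_batch`, the toy generator and solver) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_replay_batch(new_x, new_y, generator, old_solver,
                     batch_size, replay_ratio=0.5):
    """Build one training batch for generative replay (sketch).

    A fraction `replay_ratio` of the batch is replayed data: pseudo-inputs
    drawn from the old generator and labeled by the previous solver. The
    remainder is real data from the current task.
    """
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay
    idx = rng.choice(len(new_x), size=n_new, replace=False)
    replay_x = generator(n_replay)      # x' sampled from the old generator
    replay_y = old_solver(replay_x)     # y' assigned by the old solver
    batch_x = np.concatenate([new_x[idx], replay_x])
    batch_y = np.concatenate([new_y[idx], replay_y])
    return batch_x, batch_y

# Toy stand-ins (hypothetical): the "generator" samples 2-D points near the
# origin, and the "old solver" labels them by the sign of the first coordinate.
toy_generator = lambda n: rng.normal(0.0, 1.0, size=(n, 2))
toy_old_solver = lambda x: (x[:, 0] > 0).astype(int)

new_x = rng.normal(5.0, 1.0, size=(100, 2))   # current-task inputs
new_y = np.ones(100, dtype=int)               # current-task labels
bx, by = mix_replay_batch(new_x, new_y, toy_generator, toy_old_solver,
                          batch_size=32)
print(bx.shape, by.shape)  # (32, 2) (32,)
```

In the paper's full method the generator is itself retrained on the mixed stream so it can reproduce all tasks seen so far; the sketch above covers only the batch-mixing step.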