EEC: Learning to Encode and Regenerate Images for Continual Learning

Authors: Ali Ayub, Alan Wagner

ICLR 2021

Reproducibility assessment. Each entry below gives the variable assessed, the result, and the LLM response that supports it.
Research Type: Experimental. "We tested and compared EEC to several SOTA approaches on four benchmark datasets: MNIST, SVHN, CIFAR-10 and ImageNet-50. We also report the memory used by our approach and its performance in restricted memory conditions. Finally, we present an ablation study to evaluate the contribution of different components of EEC."
Researcher Affiliation: Academia. "Ali Ayub, Alan R. Wagner, The Pennsylvania State University, State College, PA, USA, 16803, {aja5755,alan.r.wagner}@psu.edu"
Pseudocode: Yes. "Appendix A, EEC Algorithms: The algorithms below describe portions of the complete EEC algorithm. Algorithm 1 is for autoencoder training (Section 3.1 in paper), Algorithm 2 is for memory integration (Section 3.2 in paper), Algorithm 3 is for rehearsal, pseudo-rehearsal and classifier training (Section 3.3 in paper), and Algorithm 4 is for filtering pseudo-images (Section 3.3 in paper)."
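
For orientation, here is a minimal PyTorch sketch of the kind of per-task convolutional autoencoder training that Algorithm 1 covers. The architecture, reconstruction-only loss, and optimizer settings are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of convolutional autoencoder training in the spirit of Algorithm 1.
# Architecture, loss, and hyper-parameters are assumptions, not EEC's code.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)        # encoded episode, kept for later replay
        return self.decoder(z), z  # reconstruction, used for pseudo-rehearsal

def train_autoencoder(model, loader, epochs, lr=1e-3, device="cuda"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for images, _ in loader:
            images = images.to(device)
            recon, _ = model(images)
            loss = nn.functional.mse_loss(recon, images)  # reconstruction term
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

In EEC proper, the encoded embeddings are what get stored (and, per Algorithm 2, later consolidated when memory runs low), so a faithful reimplementation would retain `z` per class rather than discard it.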
Open Source Code: No. The paper does not provide a specific link to an open-source code repository or explicitly state that the code for their methodology is made publicly available.
Open Datasets: Yes. "The MNIST dataset consists of grey-scale images of handwritten digits between 0 to 9, with 50,000 training images, 10,000 validation images and 10,000 test images. CIFAR-10 consists of 50,000 RGB training images and 10,000 test images belonging to 10 object classes. ImageNet-50 is a smaller subset of the ILSVRC-2012 dataset containing 50 classes with 1300 training images and 50 validation images per class." All of these are well-known, publicly available benchmark datasets in machine learning.
Dataset Splits: Yes. "The MNIST dataset consists of grey-scale images of handwritten digits between 0 to 9, with 50,000 training images, 10,000 validation images and 10,000 test images. ImageNet-50 is a smaller subset of the ILSVRC-2012 dataset containing 50 classes with 1300 training images and 50 validation images per class."
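
Since the quoted split sizes are concrete, a reproducer can mirror the MNIST 50,000/10,000/10,000 split directly in torchvision, which ships MNIST as 60,000 training plus 10,000 test images; the validation set below is carved from the training portion, and the seed is an assumption.

```python
# Rebuilding the quoted MNIST train/val/test split with torchvision.
# The random seed is an assumption; the paper does not specify one.
import torch
from torchvision import datasets, transforms

tfm = transforms.ToTensor()
full_train = datasets.MNIST("data", train=True, download=True, transform=tfm)
test_set = datasets.MNIST("data", train=False, download=True, transform=tfm)
train_set, val_set = torch.utils.data.random_split(
    full_train, [50_000, 10_000],
    generator=torch.Generator().manual_seed(0),
)
```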
Hardware Specification: Yes. "We used Pytorch (Paszke et al., 2019) and an Nvidia Titan RTX GPU for implementation and training of all neural network models."
Software Dependencies: No. The paper mentions using "Pytorch (Paszke et al., 2019)" but does not provide a specific version number for PyTorch or other key software components, which is required for reproducibility.
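
Given that gap, a reproducer would typically log the environment at run time. A minimal sketch follows; it records whatever versions and GPU happen to be installed, since the paper claims no specific versions.

```python
# Run-provenance logging: prints the library versions and GPU in use,
# since the paper states none of them explicitly.
import sys
import torch
import torchvision

print("python     ", sys.version.split()[0])
print("torch      ", torch.__version__)
print("torchvision", torchvision.__version__)
print("cuda       ", torch.version.cuda)
if torch.cuda.is_available():
    print("gpu        ", torch.cuda.get_device_name(0))
```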
Experiment Setup: Yes. "Hyperparameter values and training details are reported in Appendix C." Table 4 gives the hyper-parameters for EEC autoencoder training and Table 5 those for EEC classifier training; the tables specify values for parameters such as number of epochs, learning rate, batch size, optimizer, and weight decay.
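
To make those tables actionable, the hyper-parameters could be mirrored as a config record wired into training. The keys below follow the parameters the tables are said to specify; every value is a placeholder to be replaced with the actual numbers from Tables 4-5, none of which are reproduced here.

```python
# Config skeleton mirroring the Appendix C hyper-parameter tables.
# All values are placeholders; fill them in from Tables 4-5 of the paper.
import torch

classifier_hparams = {
    "epochs": 10,           # placeholder; see Table 5
    "learning_rate": 1e-3,  # placeholder; see Table 5
    "batch_size": 128,      # placeholder; see Table 5
    "optimizer": "Adam",    # placeholder; see Table 5
    "weight_decay": 0.0,    # placeholder; see Table 5
}

def make_optimizer(model, hp):
    opt_cls = getattr(torch.optim, hp["optimizer"])  # e.g. torch.optim.Adam
    return opt_cls(model.parameters(), lr=hp["learning_rate"],
                   weight_decay=hp["weight_decay"])
```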