Generative vs. Discriminative: Rethinking The Meta-Continual Learning

Authors: Mohammadamin Banayeeanzade, Rasoul Mirzaiezadeh, Hosein Hasani, Mahdieh Soleymani Baghshah

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "extensive experiments on standard benchmarks demonstrate the effectiveness of the proposed method." |
| Researcher Affiliation | Academia | "Mohammadamin Banayeeanzade, Rasoul Mirzaiezadeh, Hosein Hasani, Mahdieh Soleymani Baghshah. Department of Computer Engineering, Sharif University of Technology. m.banayeean@gmail.com, mirzaierasoul75@gmail.com, hasanih@ce.sharif.edu, soleymani@sharif.edu" |
| Pseudocode | No | The paper describes its methods using prose and mathematical formulations but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is publicly available at https://github.com/aminbana/GeMCL. |
| Open Datasets | Yes | "We have performed our experiments on Omniglot [27], Mini-ImageNet [51], and CIFAR-100 [25] datasets." |
| Dataset Splits | Yes | "We use 763 and 200 classes for meta-train and meta-validation respectively, and others for meta-test. ... Mini-ImageNet ... is divided into 64, 16, 20 classes for train, validation, and test meta-phases respectively. ... CIFAR-100 dataset ... we use 70 and 30 classes for meta-train and meta-test phases respectively." |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) used for the experiments. |
| Experiment Setup | Yes | "The training is done with a learning rate of 0.001, decaying to half every 0.1 of the training length. ... For this dataset, we used 20-way 10-shot and 20-way 30-shot train and validation episodes respectively with 30 query samples for both." |
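The Experiment Setup row can be made concrete with a minimal sketch: a learning rate of 0.001 that halves after each completed 10% of the training length, plus the quoted episode configuration. The function name `lr_at_step`, the step-wise interpretation of the decay, and the dictionary layout are assumptions for illustration, not taken from the paper or its repository.

```python
def lr_at_step(step: int, total_steps: int, base_lr: float = 0.001) -> float:
    """Sketch of the quoted schedule: halve base_lr once per completed
    10% of the training length (step-wise interpretation assumed)."""
    n_halvings = int(step / (0.1 * total_steps))
    return base_lr * 0.5 ** n_halvings

# Hypothetical episode configs mirroring the quoted setup:
# 20-way 10-shot training and 20-way 30-shot validation episodes,
# each with 30 query samples.
TRAIN_EPISODE = {"ways": 20, "shots": 10, "queries": 30}
VAL_EPISODE = {"ways": 20, "shots": 30, "queries": 30}
```

Under this reading, a 1000-step run would use lr 0.001 for steps 0-99, 0.0005 for steps 100-199, and so on.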