Learning to Learn By Self-Critique

Authors: Antreas Antoniou, Amos J. Storkey

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that SCA offers substantially reduced error-rates compared to baselines which only adapt on the support-set, and results in state-of-the-art benchmark performance on Mini-ImageNet and Caltech-UCSD Birds 200.
Researcher Affiliation | Academia | Antreas Antoniou, University of Edinburgh, a.antoniou@sms.ed.ac.uk; Amos Storkey, University of Edinburgh, a.storkey@ed.ac.uk
Pseudocode | Yes | Algorithm 1: SCA Algorithm combined with MAML (a hedged sketch of this loop is given below the table).
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code related to the described methodology.
Open Datasets | Yes | We evaluate the proposed method on the established few-shot learning benchmarks of Mini-ImageNet (Ravi and Larochelle, 2016) and Caltech-UCSD Birds 200 (CUB) (Chen et al., 2019).
Dataset Splits | Yes | The dataset is split beforehand into three subsets: the meta-training, meta-validation, and meta-test sets, used for training, validation, and testing respectively (an illustrative class-level split is sketched below the table).
Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models or processor types) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers.
Experiment Setup | Yes | The paper describes specific experimental setup details such as network architecture, regularization parameters (dropout probability of 0.5, weight decay rate 2e-05), and training steps (five SGD steps for support-set adaptation, 1 step for the critic update). For example, 'The DenseNet is regularised using dropout after each block (with drop probability of 0.5) and weight decay (with decay rate 2e-05).' These settings are gathered into an illustrative configuration below.
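
The pseudocode row above refers to Algorithm 1, which combines SCA with MAML: the model first adapts on the labelled support set for a few SGD steps, after which a learned critic produces an additional loss from the model's unlabelled target-set predictions and drives one or more further updates. The snippet below is a minimal sketch under assumptions, not the authors' implementation; the functional-style interface `model(x, params=...)`, the name `critic_net`, and the learning rate are all illustrative.

```python
import torch
import torch.nn.functional as F


def sca_maml_episode(model, critic_net, support_x, support_y, target_x,
                     inner_lr=0.1, num_support_steps=5, num_critic_steps=1):
    """Minimal sketch of one SCA episode combined with MAML (cf. Algorithm 1).

    Assumes a functional-style model that accepts fast weights via
    `model(x, params=...)`, as is common in MAML-style codebases; this
    interface and the learning rate are assumptions, not the paper's code.
    """
    # Start the inner loop from a copy of the current (slow) parameters.
    params = [p.clone() for p in model.parameters()]

    # 1) Standard MAML inner loop: adapt on the labelled support set.
    for _ in range(num_support_steps):
        logits = model(support_x, params=params)
        loss = F.cross_entropy(logits, support_y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]

    # 2) Self-critique step: the critic scores the model's predictions on the
    #    *unlabelled* target set and yields a loss for a further update.
    for _ in range(num_critic_steps):
        target_logits = model(target_x, params=params)
        critic_loss = critic_net(target_logits).mean()
        grads = torch.autograd.grad(critic_loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]

    # The returned fast weights are then used to compute the outer-loop
    # (meta) loss on the labelled target set, as in standard MAML.
    return params
```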
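
The dataset-splits row quotes the standard few-shot protocol of pre-splitting each benchmark into meta-training, meta-validation, and meta-test sets. The sketch below only illustrates the idea of a class-level split; the fractions and seed are placeholders, since the benchmarks referenced in the paper ship with fixed, published splits.

```python
import random


def split_classes(all_classes, val_fraction=0.1, test_fraction=0.2, seed=0):
    """Illustrative class-level split into meta-train / meta-val / meta-test.

    The fractions and seed here are placeholders only and do not reflect
    the benchmarks' published splits.
    """
    rng = random.Random(seed)
    classes = list(all_classes)
    rng.shuffle(classes)
    n_test = int(len(classes) * test_fraction)
    n_val = int(len(classes) * val_fraction)
    return {
        "meta_test": classes[:n_test],
        "meta_val": classes[n_test:n_test + n_val],
        "meta_train": classes[n_test + n_val:],
    }
```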
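
Finally, the experiment-setup values quoted above can be collected into a small configuration. Only the numeric values (dropout 0.5, weight decay 2e-05, five support-set SGD steps, one critic step) come from the paper; the key names and the backbone label are shorthand introduced here.

```python
# Illustrative configuration; only the numeric values are quoted from the
# paper, while the key names and backbone label are shorthand used here.
experiment_config = {
    "backbone": "DenseNet",        # regularised with dropout after each block
    "dropout_prob": 0.5,           # drop probability quoted in the paper
    "weight_decay": 2e-05,         # weight decay rate quoted in the paper
    "num_support_sgd_steps": 5,    # inner-loop SGD steps on the support set
    "num_critic_steps": 1,         # additional update driven by the critic
}
```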