Learning from the Past: Continual Meta-Learning with Bayesian Graph Neural Networks

Authors: Yadan Luo, Zi Huang, Zheng Zhang, Ziwei Wang, Mahsa Baktashmotlagh, Yang Yang

AAAI 2020, pp. 5021-5028 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on the miniImageNet and tieredImageNet datasets demonstrate the effectiveness and efficiency of the proposed method, improving the performance by 42.8% compared with state-of-the-art on the miniImageNet 5-way 1-shot classification task."
Researcher Affiliation | Academia | (1) The University of Queensland, Australia; (2) Bio-Computing Research Center, Harbin Institute of Technology, Shenzhen, China; (3) Pengcheng Laboratory, Shenzhen, China; (4) University of Electronic Science and Technology of China, China
Pseudocode | Yes | "Algorithm 1: Meta-training of the Proposed CML-BGNN."
Open Source Code | Yes | "Our source code is implemented based on Pytorch." The code is released at https://github.com/Luoyadan/BGNN-AAAI
Open Datasets | Yes | "miniImageNet is the subset of the ILSVRC-12 dataset... We follow the class split used by (Ravi and Larochelle 2017)... tieredImageNet is a larger subset of ILSVRC-2012..."
Dataset Splits | Yes | "We follow the class split used by (Ravi and Larochelle 2017), where 64 classes are used for training, 16 for validation, and 20 for testing."
Hardware Specification | Yes | "All experiments are conducted on a server with two GeForce GTX 1080 Ti and two GTX 2080 Ti GPUs."
Software Dependencies | No | The paper names PyTorch as the implementation framework but does not provide version numbers for PyTorch or any other software dependency.
Experiment Setup | Yes | "The mini-batch size for all graph-based models is 80 and 64 for 1-shot and 5-shot experiments, respectively. The proposed model was trained by the Adam optimizer with an initial learning rate η of 1×10^-3 and a weight decay of 1×10^-6. The dropout rate is set to 0.3 and the loss coefficient γ is set to 1." A minimal PyTorch sketch of this configuration appears below the table.
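
To make the quoted setup concrete, the sketch below wires the reported numbers into a PyTorch configuration, the framework the paper states it uses. Only the numeric values (the 64/16/20 class split, the 80/64 batch sizes, learning rate, weight decay, dropout rate, and γ) come from the paper; the `CMLBGNN` module, its feature dimension, and the two-term loss are hypothetical placeholders, not the authors' released implementation.

```python
# Minimal sketch of the training configuration quoted above. All names
# and structure other than the reported hyperparameter values are
# assumptions for illustration only.
import torch
import torch.nn as nn

# miniImageNet class split (Ravi and Larochelle 2017): 64/16/20.
NUM_TRAIN_CLASSES, NUM_VAL_CLASSES, NUM_TEST_CLASSES = 64, 16, 20

# Reported mini-batch sizes for graph-based models.
BATCH_SIZE_1SHOT = 80
BATCH_SIZE_5SHOT = 64

DROPOUT_RATE = 0.3  # reported dropout rate
GAMMA = 1.0         # reported loss coefficient gamma


class CMLBGNN(nn.Module):
    """Hypothetical stand-in for the CML-BGNN model (see the repo above)."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Dropout(p=DROPOUT_RATE),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)


model = CMLBGNN()

# Adam with the reported initial learning rate and weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)


def total_loss(cls_loss: torch.Tensor, aux_loss: torch.Tensor) -> torch.Tensor:
    # Hypothetical combination of a classification loss with a
    # gamma-weighted auxiliary term; the actual loss decomposition is
    # defined in the paper, not here.
    return cls_loss + GAMMA * aux_loss
```

For the authors' actual model and loss definitions, consult the repository linked in the table above.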