Automated Relational Meta-learning

Authors: Huaxiu Yao, Xian Wu, Zhiqiang Tao, Yaliang Li, Bolin Ding, Ruirui Li, Zhenhui Li

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on 2D toy regression and few-shot image classification and the results demonstrate the superiority of ARML over state-of-the-art baselines.
Researcher Affiliation | Collaboration | Huaxiu Yao (Pennsylvania State University), Xian Wu (University of Notre Dame), Zhiqiang Tao (Northeastern University), Yaliang Li (Alibaba Group), Bolin Ding (Alibaba Group), Ruirui Li (University of California, Los Angeles), Zhenhui Li (Pennsylvania State University)
Pseudocode | Yes | Algorithm 1: Meta-Training Process of ARML; Algorithm 2: Meta-Testing Process of ARML (a hedged sketch of these loops appears after this table)
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology, nor a link to a code repository.
Open Datasets | Yes | In 2D regression problem, we adopt the similar regression problem settings as (Finn et al., 2018; Vuorio et al., 2018; Yao et al., 2019b; Rusu et al., 2019)... four fine-grained image classification datasets are included (i.e., CUB-200-2011 (Bird), Describable Textures Dataset (Texture), FGVC of Aircraft (Aircraft), and FGVCx-Fungi (Fungi))... miniImagenet and tieredImagenet (Ren et al., 2018).
Dataset Splits | Yes | Following the traditional meta-learning settings, all datasets are divided into meta-training, meta-validation and meta-testing classes (a minimal class-level split sketch appears after this table).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components such as GRU, GCN, convolutional layers, and OpenCV, but does not provide specific version numbers for any of them. For example, it says 'we use the GRU as the encoder and decoder' but never names a framework or version such as 'PyTorch 1.x' or 'TensorFlow 2.x'.
Experiment Setup | Yes | In 2D regression problem, we set the inner-loop stepsize (i.e., α) and outer-loop stepsize (i.e., β) as 0.001 and 0.001, respectively. The embedding function E is set as one layer with 40 neurons. The autoencoder aggregator is constructed by the gated recurrent structures. We set the meta-batch size as 25 and the inner loop gradient steps as 5. In few-shot image classification, for both Plain-Multi and Art-Multi datasets, we set the corresponding inner stepsize (i.e., α) as 0.001 and the outer stepsize (i.e., β) as 0.01. The meta-batch size is set as 4. For the inner loop, we use 5 gradient steps. The number of vertices of the meta-knowledge graph for Plain-Multi and Art-Multi datasets is set as 4 and 8, respectively. (These values are gathered in the configuration sketch after this table.)
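
The pseudocode row above points to Algorithms 1 and 2. For orientation only, below is a minimal, first-order sketch of the MAML-style meta-training and meta-testing loops that those algorithms build on; ARML's prototype-based relational structure and meta-knowledge graph modulation are omitted, and the linear toy tasks, iteration count, and first-order approximation are assumptions of this sketch rather than the paper's procedure. The stepsizes, inner-step count, and meta-batch size reuse the 2D-regression values quoted in the Experiment Setup row.

```python
# Hedged, first-order sketch of a MAML-style meta-training / meta-testing loop.
# ARML's prototype graph and meta-knowledge graph are NOT implemented here;
# the linear toy tasks and iteration count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.001, 0.001        # inner / outer stepsizes quoted for 2D regression
inner_steps, meta_batch = 5, 25   # inner-loop gradient steps and meta-batch size

def sample_task(n_support=10, n_query=10):
    """Toy linear task y = a*x + b with a support/query split (illustrative only)."""
    a, b = rng.uniform(-3, 3), rng.uniform(-3, 3)
    x = rng.uniform(-5, 5, size=n_support + n_query)
    y = a * x + b
    return (x[:n_support], y[:n_support]), (x[n_support:], y[n_support:])

def mse_grads(theta, x, y):
    """Analytic MSE gradients for the linear model y_hat = w*x + c."""
    w, c = theta
    err = w * x + c - y
    return np.array([2 * np.mean(err * x), 2 * np.mean(err)])

# Meta-training loop (in the spirit of Algorithm 1, heavily simplified).
theta = np.zeros(2)                               # globally shared meta-initialization
for _ in range(1000):
    meta_grad = np.zeros(2)
    for _ in range(meta_batch):
        (xs, ys), (xq, yq) = sample_task()
        phi = theta.copy()
        for _ in range(inner_steps):              # task-specific adaptation
            phi -= alpha * mse_grads(phi, xs, ys)
        meta_grad += mse_grads(phi, xq, yq)       # first-order outer gradient
    theta -= beta * meta_grad / meta_batch

# Meta-testing (in the spirit of Algorithm 2): adapt from theta on a new task's support set.
(xs, ys), (xq, yq) = sample_task()
phi = theta.copy()
for _ in range(inner_steps):
    phi -= alpha * mse_grads(phi, xs, ys)
print("query MSE after adaptation:", np.mean((phi[0] * xq + phi[1] - yq) ** 2))
```

The sketch keeps a single globally shared meta-initialization; in ARML, the initialization handed to the inner loop is instead tailored per task through the prototype-based relational graph and the meta-knowledge graph.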
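
For the dataset-splits row, here is a minimal sketch of a class-level split into disjoint meta-training / meta-validation / meta-testing pools. The 64/16/20 class counts follow the common miniImagenet convention and are an assumption of this sketch; the summary above does not quote the exact ratios.

```python
# Hedged sketch of a class-level meta-train / meta-val / meta-test split.
# The 64/16/20 defaults are a standard miniImagenet convention (an assumption here).
import random

def split_classes(class_names, n_train=64, n_val=16, seed=0):
    """Split class names into disjoint pools; few-shot tasks are later sampled within each pool."""
    shuffled = list(class_names)
    random.Random(seed).shuffle(shuffled)
    meta_train = shuffled[:n_train]
    meta_val = shuffled[n_train:n_train + n_val]
    meta_test = shuffled[n_train + n_val:]
    return meta_train, meta_val, meta_test

# Example with 100 placeholder class names (miniImagenet has 100 classes in total).
pools = split_classes([f"class_{i}" for i in range(100)])
```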
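
The hyperparameters quoted in the Experiment Setup row, gathered into one configuration sketch for convenience. The key names and grouping below are assumptions of this summary, not the authors' configuration format.

```python
# Convenience summary of the quoted ARML hyperparameters (key names are assumptions).
ARML_HPARAMS = {
    "2d_regression": {
        "inner_lr_alpha": 0.001,
        "outer_lr_beta": 0.001,
        "embedding_layers": 1,        # embedding function E: one layer with 40 neurons
        "embedding_units": 40,
        "aggregator": "GRU autoencoder",
        "meta_batch_size": 25,
        "inner_gradient_steps": 5,
    },
    "few_shot_classification": {      # Plain-Multi and Art-Multi
        "inner_lr_alpha": 0.001,
        "outer_lr_beta": 0.01,
        "meta_batch_size": 4,
        "inner_gradient_steps": 5,
        "meta_knowledge_graph_vertices": {"Plain-Multi": 4, "Art-Multi": 8},
    },
}
```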