Meta-Learning with Neural Tangent Kernels

Authors: Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4 EXPERIMENTS: We conduct a set of experiments to evaluate the effectiveness of our proposed methods, including a sine wave regression toy experiment, few-shot classification, robustness to adversarial attacks, out-of-distribution generalization and ablation study. ... Table 2: Few-shot classification results on MiniImageNet and FC-100.
Researcher Affiliation | Academia | Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu, Department of Computer Science and Engineering, State University of New York at Buffalo {yufanzho,zhenyiwa,jxian,changyou,jinhui}@buffalo.edu
Pseudocode | Yes | A ALGORITHMS: Our proposed algorithms for meta-learning in the RKHS are summarized in Algorithm 1. Algorithm 1: Meta-Learning in RKHS. (A generic kernel-based sketch of such a meta-learning step is given after this table.)
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository for their methods.
Open Datasets | Yes | For this experiment, we choose two popular datasets adopted for meta-learning: MiniImageNet and FC-100 (Oreshkin et al., 2018). ... The CUB (Wah et al., 2011) and VGG Flower (Nilsback & Zisserman, 2008) are fine-grained datasets used in this experiment...
Dataset Splits | Yes | We follow Lee et al. (2020) to split these datasets into meta training/validation/testing sets.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or cloud computing instance specifications used for running the experiments.
Software Dependencies | No | The paper mentions 'The Adam optimizer (Kingma & Ba, 2015) is used' but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | Similar to Finn et al. (2017), the model architecture is set to be a four-layer convolutional neural network with ReLU activation. The filter number is set to be 32. The Adam optimizer (Kingma & Ba, 2015) is used to minimize the energy functional. Meta batch size is set to be 16 and learning rates are set to be 0.01 for Meta-RKHS-II.
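Algorithm 1 itself is not reproduced in this review. For orientation only, the following is a minimal, generic sketch of one meta-learning step with a kernel-based inner solver: task-specific adaptation is a closed-form kernel ridge regression on the support set, and the meta-objective is the adapted predictor's error on the query set. The RBF kernel, the regularizer lam, and all function names are illustrative assumptions; the paper's Algorithm 1 operates with the neural tangent kernel of the meta-model, and its actual inner and outer updates may differ.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2); a stand-in for the NTK.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_meta_objective(x_support, y_support, x_query, y_query, lam=1e-3):
    # Inner adaptation: closed-form kernel ridge regression fit on the support set.
    k_ss = rbf_kernel(x_support, x_support)
    alpha = np.linalg.solve(k_ss + lam * np.eye(len(x_support)), y_support)
    # Outer objective: squared error of the adapted predictor on the query set.
    k_qs = rbf_kernel(x_query, x_support)
    query_preds = k_qs @ alpha
    return float(((query_preds - y_query) ** 2).mean())
```

In the paper's setting, the kernel would come from the meta-learned network and the outer loop would update that network; the sketch only illustrates the structure of the per-task computation.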
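The quoted setup matches the widely used four-layer convolutional backbone of Finn et al. (2017). As a hedged illustration only: the sketch below fills in details the quoted text does not specify (batch norm, 2x2 max pooling, and an 84x84 MiniImageNet input resolution are assumptions taken from that standard backbone, and the module and parameter names are hypothetical).

```python
import torch.nn as nn

def conv_block(in_channels, out_channels):
    # One block of the standard Finn et al. (2017) few-shot backbone:
    # 3x3 convolution, batch norm, ReLU, 2x2 max pooling.
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class FourLayerConvNet(nn.Module):
    # Four conv blocks with 32 filters each, followed by a linear classification head.
    def __init__(self, num_classes, in_channels=3, num_filters=32):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_channels, num_filters),
            conv_block(num_filters, num_filters),
            conv_block(num_filters, num_filters),
            conv_block(num_filters, num_filters),
        )
        # 84x84 inputs shrink to a 5x5 feature map after four 2x2 poolings.
        self.head = nn.Linear(num_filters * 5 * 5, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))
```

Per the quoted setup, such a backbone would then be trained with the Adam optimizer, a meta batch size of 16, and a learning rate of 0.01 for Meta-RKHS-II.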