How Important is the Train-Validation Split in Meta-Learning?

Authors: Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason Lee, Sham Kakade, Huan Wang, Caiming Xiong

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our theories by experimentally showing that the train-train method can indeed outperform the train-val method, on both simulations and real meta-learning tasks.
Researcher Affiliation | Collaboration | Salesforce Research, Georgia Tech, Princeton University, University of Washington
Pseudocode | No | The paper describes algorithms mathematically and in text but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology, nor links to a code repository.
Open Datasets | Yes | We experiment on miniImageNet (Ravi & Larochelle, 2017) and tieredImageNet (Ren et al., 2018) datasets.
Dataset Splits | Yes | miniImageNet consists of 100 classes of images from ImageNet (Krizhevsky et al., 2012) and each class has 600 images of resolution 84 × 84 × 3. We use 64 classes for training, 16 classes for validation, and the remaining 20 classes for testing (Ravi & Larochelle, 2017). (A hypothetical sketch of this split protocol appears after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as CPU or GPU models, or memory specifications.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries).
Experiment Setup | Yes | We measure the performance of the train-train and train-val methods using the ℓ2-error ‖w0 − ŵ0,T^{tr-tr, tr-val}‖₂². Across all simulations, we optimally tune the regularization coefficient λ in the train-train method, and use a sufficiently large λ = 2000 in the train-val method (according to Lemma D.1). ... In each N-way K-shot setting (in Table 1), in meta-test, each task provides an N-way K-shot dataset for the model adaptation. In meta-training, for each task we sample an N-way (K + 1)-shot dataset (and do not allow the algorithm to tune the size of this dataset), so that each task only has n = N(K + 1) examples, and we allow the algorithm to tune n1 ∈ [0, n]. Table 1 uses the default choice of an even split n1 = n2 = n/2 following Zhou et al. (2019) and Rajeswaran et al. (2019). For example, for a 5-way 5-shot classification setting, each task contains 5 × (5 + 1) = 30 total images, and we set n1 = n2 = 15. (A sketch of the two objectives appears after the table.)
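
The episode protocol quoted in the Dataset Splits and Experiment Setup rows is straightforward to mock up. Below is a hypothetical Python sketch: the 64/16/20 class partition and the even n1 = n2 = n/2 episode split come from the quotes above, while the class IDs, the seed, and the (class, shot) placeholders are stand-ins for real images, not code from the paper.

```python
import random

# 100 miniImageNet classes partitioned 64/16/20 into train/val/test
# (integer class IDs are stand-ins; real code would map them to image folders).
rng = random.Random(0)
classes = list(range(100))
rng.shuffle(classes)
train_cls, val_cls, test_cls = classes[:64], classes[64:80], classes[80:]

def sample_meta_train_episode(class_pool, N=5, K=5):
    """Draw an N-way (K+1)-shot meta-training episode: n = N*(K+1) examples
    per task, split evenly into n1 = n2 = n/2 following the paper's default."""
    ways = rng.sample(class_pool, N)
    episode = [(c, shot) for c in ways for shot in range(K + 1)]
    rng.shuffle(episode)
    n1 = len(episode) // 2          # n = 30 and n1 = n2 = 15 for 5-way 5-shot
    return episode[:n1], episode[n1:]

support, query = sample_meta_train_episode(train_cls)
assert len(support) == len(query) == 15
```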
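
The train-train vs. train-val comparison itself can be sketched for the linear centroid model the simulations are built on: each task adapts a ridge-regularized regressor toward a shared centroid w0, with train-val fitting on n1 samples and scoring on the held-out n2, and train-train fitting and scoring on all n samples. The following is a minimal numpy sketch under those assumptions; the function names and synthetic data are mine, and the paper's exact estimator ŵ0,T (obtained by optimizing these objectives) is not reproduced here.

```python
import numpy as np

def adapt(X, y, w0, lam):
    """Closed-form ridge adaptation toward the centroid w0:
    argmin_w (1/2n)||Xw - y||^2 + (lam/2)||w - w0||^2."""
    n, d = X.shape
    A = X.T @ X / n + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y / n + lam * w0)

def meta_loss(w0, tasks, lam, n1=None):
    """Outer loss averaged over tasks. n1=None gives the train-train
    objective (adapt and evaluate on all n samples); an integer n1 gives
    the train-val objective (adapt on the first n1, evaluate on the rest)."""
    total = 0.0
    for X, y in tasks:
        if n1 is None:
            X_in, y_in, X_out, y_out = X, y, X, y
        else:
            X_in, y_in, X_out, y_out = X[:n1], y[:n1], X[n1:], y[n1:]
        w = adapt(X_in, y_in, w0, lam)
        total += np.mean((X_out @ w - y_out) ** 2) / 2
    return total / len(tasks)

# Synthetic illustration: tasks whose regressors cluster around a true centroid.
rng = np.random.default_rng(0)
d, n, T = 10, 30, 200
w0_star = rng.normal(size=d)
tasks = []
for _ in range(T):
    wt = w0_star + 0.1 * rng.normal(size=d)   # task vector near the centroid
    X = rng.normal(size=(n, d))
    tasks.append((X, X @ wt + 0.1 * rng.normal(size=n)))

print(meta_loss(w0_star, tasks, lam=0.5, n1=None))      # train-train objective
print(meta_loss(w0_star, tasks, lam=2000.0, n1=n // 2)) # train-val objective
```

Per the quoted setup, λ would be tuned optimally when minimizing the train-train objective, while the train-val objective uses a fixed large λ = 2000 (following the paper's Lemma D.1).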