Subspace Learning for Effective Meta-Learning
Authors: Weisen Jiang, James Kwok, Yu Zhang
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on regression and classification meta-learning datasets verify the effectiveness of the proposed algorithm. |
| Researcher Affiliation | Academia | (1) Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology; (2) Department of Computer Science and Engineering, Hong Kong University of Science and Technology; (3) Peng Cheng Laboratory. |
| Pseudocode | Yes | The proposed procedure is shown in Algorithm 1. |
| Open Source Code | No | The paper does not provide an explicit statement or link confirming the availability of its source code. |
| Open Datasets | Yes | We use three meta-datasets: (i) Meta-Dataset-BTAF, proposed in (Yao et al., 2019), which consists of four image classification datasets: (a) Bird; (b) Texture; (c) Aircraft; and (d) Fungi. (ii) Meta-Dataset-ABF, proposed in (Zhou et al., 2021a), which consists of Aircraft, Bird, and Fungi. (iii) Meta-Dataset-CIO, which consists of three widely-used few-shot datasets: CIFAR-FS (Bertinetto et al., 2018), miniImageNet (Vinyals et al., 2016), and Omniglot (Lake et al., 2015). |
| Dataset Splits | Yes | We use the meta-training/meta-validation/meta-testing splits in (Yao et al., 2020; Zhou et al., 2021a; Lake et al., 2015). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not specify the versions of any software libraries, frameworks, or programming languages used for implementation. |
| Experiment Setup | Yes | For the meta-learner, the initial learning rate is 0.001, which is then reduced by half every 5,000 iterations. The base learner uses a learning rate of α = 0.05, v^(0) = (1/m)·1_m, and T_in = 5 (resp. 20) at meta-training (resp. meta-testing). The temperature is γ_t = max(10^-5, 0.5 - t/T), a linear annealing schedule as in (Chen et al., 2020; Zhou et al., 2021b). A hedged code sketch of these schedules follows the table. |
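
The Experiment Setup row packs several schedules into one sentence; the minimal Python sketch below makes them concrete. It is a hedged reconstruction, not the authors' code (the paper releases none): `T_TOTAL` is a hypothetical iteration budget, all identifiers are illustrative, and the exact linear form of the temperature schedule is an assumption inferred from the description above.

```python
# Hedged sketch of the training schedules described in the Experiment Setup
# row. All identifiers are illustrative; no official implementation exists.

META_LR_INIT = 0.001    # initial meta-learner learning rate
HALVING_PERIOD = 5_000  # meta-learner LR is halved every 5,000 iterations
ALPHA = 0.05            # base-learner (inner-loop) learning rate
T_IN_TRAIN, T_IN_TEST = 5, 20  # inner-loop steps at meta-training / meta-testing
T_TOTAL = 50_000        # total meta-training iterations (assumed, not stated)

def meta_learning_rate(t: int) -> float:
    """Meta-learner LR: 0.001, halved every 5,000 iterations."""
    return META_LR_INIT * 0.5 ** (t // HALVING_PERIOD)

def temperature(t: int, T: int = T_TOTAL) -> float:
    """Linearly annealed temperature gamma_t = max(1e-5, 0.5 - t/T);
    the exact form is assumed from the paper's description."""
    return max(1e-5, 0.5 - t / T)

if __name__ == "__main__":
    # Spot-check the two schedules at a few iterations.
    for t in (0, 5_000, 10_000, 25_000, 49_999):
        print(f"t={t:>6}  lr={meta_learning_rate(t):.6f}  gamma={temperature(t):.6f}")
```

Note that the number of inner-loop steps differs between phases: 5 steps during meta-training but 20 at meta-testing, as stated in the setup row.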