MetaFun: Meta-Learning with Iterative Functional Updates
Authors: Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam Kosiorek, Yee Whye Teh
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our proposed model on both few-shot regression and classification tasks. In all experiments that follow, we partition the data into training, validation and test meta-sets, each containing data from disjoint tasks. For quantitative results, we train each model with 5 different random seeds and report the mean and the standard deviation of the test accuracy. For further details on hyperparameter tuning, see the Appendix. All experiments are performed using TensorFlow (Abadi et al., 2015), and the code is available... |
| Researcher Affiliation | Collaboration | (1) Department of Statistics, University of Oxford, Oxford, United Kingdom; (2) Google DeepMind, London, United Kingdom; (3) Applied AI Lab, Oxford Robotics Institute, University of Oxford, Oxford, United Kingdom. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks with explicit labels like "Algorithm" or "Pseudocode". |
| Open Source Code | Yes | A TensorFlow implementation of our model is available at github.com/jinxu06/metafun-tensorflow |
| Open Datasets | Yes | We apply our approach to solve meta-learning problems on both regression and classification tasks, and achieve state-of-the-art performance on heavily benchmarked datasets such as miniImageNet (Vinyals et al., 2016) and tieredImageNet (Ren et al., 2018) |
| Dataset Splits | Yes | In all experiments that follow, we partition the data into training, validation and test meta-sets, each containing data from disjoint tasks. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions TensorFlow, but it does not specify a version number for TensorFlow or any other software dependencies needed to replicate the experiment. |
| Experiment Setup | Yes | For all experiments, hyperparameters are chosen by training on the training meta-set, and comparing target accuracy on the validation meta-set. We conduct randomised hyperparameter search (Bergstra & Bengio, 2012), and the search space is given in the Appendix. |
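
The Research Type and Dataset Splits rows describe the evaluation protocol: disjoint-task training/validation/test meta-sets, 5 random seeds per model, and reporting the mean and standard deviation of test accuracy. The sketch below illustrates that protocol in outline only; `build_meta_sets` and `train_and_evaluate` are hypothetical stand-ins for the paper's pipeline, not functions from the released code.

```python
import numpy as np

# Hypothetical helpers (not part of the MetaFun repo): build_meta_sets would
# partition tasks into disjoint train/validation/test meta-sets, and
# train_and_evaluate would train the model and return test accuracy for one seed.
from my_metafun_repro import build_meta_sets, train_and_evaluate  # assumed

SEEDS = [0, 1, 2, 3, 4]  # the paper reports results over 5 random seeds

train_set, val_set, test_set = build_meta_sets(dataset="miniImageNet")

accuracies = []
for seed in SEEDS:
    acc = train_and_evaluate(train_set, val_set, test_set, seed=seed)
    accuracies.append(acc)

# Report mean and standard deviation of test accuracy, as in the paper.
print(f"test accuracy: {np.mean(accuracies):.2%} +/- {np.std(accuracies):.2%}")
```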
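The Experiment Setup row notes that hyperparameters are chosen by randomised search (Bergstra & Bengio, 2012), selecting by accuracy on the validation meta-set. Below is a minimal, generic sketch of that procedure; the search-space values are placeholders, since the actual ranges are given in the paper's Appendix, and `evaluate_on_validation` is an assumed callable.

```python
import random

# Illustrative search space only; the real ranges are listed in the Appendix.
SEARCH_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -3),
    "num_iterations": lambda: random.choice([3, 5, 7]),
    "embedding_dim": lambda: random.choice([64, 128, 256]),
}

def sample_config():
    """Draw one random configuration from the search space."""
    return {name: draw() for name, draw in SEARCH_SPACE.items()}

def random_search(evaluate_on_validation, num_trials=50):
    """Randomised search: keep the config with the best validation accuracy.

    `evaluate_on_validation` is a hypothetical callable that trains on the
    training meta-set with the given config and returns validation accuracy.
    """
    best_config, best_acc = None, float("-inf")
    for _ in range(num_trials):
        config = sample_config()
        acc = evaluate_on_validation(config)
        if acc > best_acc:
            best_config, best_acc = config, acc
    return best_config, best_acc
```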