A Distribution-dependent Analysis of Meta Learning

Authors: Mikhail Konobeev, Ilja Kuzborskij, Csaba Szepesvári

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The paper is completed by an empirical study of EM. In particular, our experimental results show that the EM algorithm can attain the lower bound as the number of tasks grows, while the algorithm is also successful in competing with its alternatives when used in a representation learning context."
Researcher Affiliation | Collaboration | "Mikhail Konobeev¹, Ilja Kuzborskij², Csaba Szepesvári¹ ²; ¹Computing Science Department, University of Alberta, Edmonton, Alberta, Canada; ²DeepMind, London, United Kingdom."
Pseudocode | Yes | "Algorithm 1: EM procedure to estimate (α, σ², Σ). Input: initial parameter estimates Ê₁ = (α̂₁, σ̂₁², Σ̂₁). Output: final parameter estimates Êₜ = (α̂ₜ, σ̂ₜ², Σ̂ₜ)." (A runnable sketch of such a loop follows this table.)
Open Source Code | Yes | "Implementation of all of our experiments and the required dataset are provided in the supplementary material."
Open Datasets | Yes | "We also conducted experiments on a real world dataset containing information about students in 139 schools in years 1985-1987 (Dua & Graff, 2017, School Dataset)."
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | "The convergence threshold was set to 10⁻⁶ while the maximum number of iterations was set to 10³."
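
The Pseudocode and Experiment Setup rows quote an EM procedure for estimating (α, σ², Σ), run with a 10⁻⁶ convergence threshold and at most 10³ iterations. Below is a minimal NumPy sketch of one plausible such loop, assuming the hierarchical Gaussian model the paper analyzes: per-task weights wᵢ ~ N(α, Σ) and responses yᵢ = Xᵢwᵢ + εᵢ with εᵢ ~ N(0, σ²I). The function name `em_fit`, the task data layout, and the choice to test convergence on the change in α are our assumptions, not the authors' implementation.

```python
import numpy as np

def em_fit(tasks, alpha, sigma2, Sigma, tol=1e-6, max_iter=1000):
    """tasks: list of (X_i, y_i) pairs. alpha (d,), sigma2 (scalar),
    Sigma (d, d) are the initial parameter estimates."""
    for _ in range(max_iter):            # the paper caps EM at 10^3 iterations
        alpha_old = alpha
        Sigma_inv = np.linalg.inv(Sigma)
        # E-step: Gaussian posterior N(m_i, C_i) over each task's weights.
        means, covs = [], []
        for X, y in tasks:
            C = np.linalg.inv(Sigma_inv + X.T @ X / sigma2)
            m = C @ (Sigma_inv @ alpha + X.T @ y / sigma2)
            means.append(m)
            covs.append(C)
        # M-step: closed-form updates of the shared parameters.
        alpha = np.mean(means, axis=0)
        Sigma = np.mean([C + np.outer(m - alpha, m - alpha)
                         for m, C in zip(means, covs)], axis=0)
        n_total = sum(len(y) for _, y in tasks)
        sigma2 = sum(np.sum((y - X @ m) ** 2) + np.trace(X @ C @ X.T)
                     for (X, y), m, C in zip(tasks, means, covs)) / n_total
        # Stop once the mean estimate moves less than the 1e-6 threshold
        # (the paper does not say which quantity its threshold applies to).
        if np.linalg.norm(alpha - alpha_old) < tol:
            break
    return alpha, sigma2, Sigma
```

Called on a list of per-task design/response pairs, e.g. `em_fit(tasks, np.zeros(d), 1.0, np.eye(d))`, the loop returns the final estimates Êₜ = (α̂ₜ, σ̂ₜ², Σ̂ₜ) in the notation of Algorithm 1.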