Learning To Learn Around A Common Mean
Authors: Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Preliminary experiments confirm our theoretical findings, highlighting the advantage of our approach with respect to independent task learning. Also, from Section 5 (Experiments): We report the empirical evaluation of our LTL estimator on synthetic and real data |
| Researcher Affiliation | Academia | Giulia Denevi (1,2), Carlo Ciliberto (3,4), Dimitris Stamos (4) and Massimiliano Pontil (1,4); 1: Istituto Italiano di Tecnologia (Italy), 2: University of Genoa (Italy), 3: Imperial College of London (UK), 4: University College of London (UK) |
| Pseudocode | Yes | Algorithm 1 The Splitting Stochastic Meta-algorithm (see the hedged sketch after this table) |
| Open Source Code | Yes | The code used for the following experiments is available at https://github.com/dstamos |
| Open Datasets | Yes | We compared the performance of LTL and ITL on the School dataset (see [2]), which contains 139 tasks of dimension d = 26 each. |
| Dataset Splits | Yes | We used 50 tasks to perform model selection and 200 tasks for test (synthetic experiments). Also: For all the experiments we randomly sampled 75 tasks for training, 25 for validation and the rest for test (School dataset) |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or cloud resources) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, library names and versions). |
| Experiment Setup | Yes | In the experiments, λ and the splitting parameter r were tuned by cross-validation (see App. F for more details). Specifically, we considered 20 candidate values of λ in the range [10^{-6}, 10^2], logarithmically spaced (a sketch of this grid follows the table). |
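
The paper's Algorithm 1 is only named in the table, so here is a minimal, hedged NumPy sketch of a splitting stochastic meta-algorithm of this kind: each task fits a ridge regression biased towards the current common mean h on one split of its sample, and the held-out split drives a stochastic gradient step on h. The step size `gamma`, the iterate averaging, and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ridge_around_mean(X, y, h, lam):
    """Ridge regression biased towards a common mean h:
    argmin_w (1/n)||Xw - y||^2 + lam * ||w - h||^2 (closed form)."""
    n, d = X.shape
    A = X.T @ X / n + lam * np.eye(d)
    return h + np.linalg.solve(A, X.T @ (y - X @ h) / n)

def splitting_meta_sgd(tasks, lam, r=0.5, gamma=0.1):
    """Hedged sketch: for each task, a fraction r of its sample fits the
    inner ridge estimator around the current h; the rest of the sample
    gives a stochastic gradient for h. Assumes 0 < r < 1 and squared loss;
    gamma is a hypothetical constant step size."""
    d = tasks[0][0].shape[1]
    h = np.zeros(d)
    h_sum = np.zeros(d)
    for X, y in tasks:
        n_in = max(1, int(r * len(y)))
        X_in, y_in = X[:n_in], y[:n_in]
        X_out, y_out = X[n_in:], y[n_in:]
        w = ridge_around_mean(X_in, y_in, h, lam)
        # Chain rule: dw/dh = lam * (X_in^T X_in / n_in + lam I)^{-1},
        # so the gradient of the held-out squared loss w.r.t. h is:
        A = X_in.T @ X_in / n_in + lam * np.eye(d)
        resid = X_out @ w - y_out
        grad = lam * np.linalg.solve(A, 2.0 / len(y_out) * X_out.T @ resid)
        h = h - gamma * grad
        h_sum += h
    return h_sum / len(tasks)  # averaged iterate (an illustrative choice)
```

The closed form used in `ridge_around_mean` follows from substituting v = w - h, which reduces the biased problem to a standard ridge regression on the residual targets y - Xh.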
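For the tuning protocol quoted in the last row, a short sketch of the λ grid and a random task-level split; the seed and variable names are hypothetical, and the 139-task count comes from the School dataset row above.

```python
import numpy as np

# 20 log-spaced candidate values of lambda in [10^-6, 10^2], as quoted above
lams = np.logspace(-6, 2, 20)

n_tasks = 139                   # School dataset size quoted in the table
rng = np.random.default_rng(0)  # hypothetical seed, for illustration only
perm = rng.permutation(n_tasks)
# 75 training tasks, 25 validation tasks, the rest for test
train_ids, val_ids, test_ids = perm[:75], perm[75:100], perm[100:]
```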