Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory
Authors: Ron Amit, Ron Meir
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we demonstrate the performance of our transfer method with image classification tasks solved by deep neural networks. In image classification, the data samples, z = (x, y), consist of an image, x, and a label, y. The hypothesis class {h_w : w ∈ ℝ^d} is the set of neural networks with a given architecture (which will be specified later). As a loss function ℓ(h_w, z) we will use the cross-entropy loss. (A minimal sketch of this setup appears below the table.) |
| Researcher Affiliation | Academia | The Viterbi Faculty of Electrical Engineering, Technion - Israel Institute of Technology, Haifa, Israel. Correspondence to: Ron Amit <ronamit@campus.technion.ac.il>, Ron Meir <rmeir@ee.technion.ac.il>. |
| Pseudocode | Yes | Both algorithms are described in pseudo-code in the supplementary material (Section A.4). (The KL term at the core of these objectives is sketched below the table.) |
| Open Source Code | Yes | Code is available at: https://github.com/ron-amit/meta-learning-adjusting-priors. |
| Open Datasets | Yes | We conduct two experiments with two different task environments, based on augmentations of the MNIST dataset (LeCun, 1998). (A task-sampling sketch appears below the table.) |
| Dataset Splits | No | The paper reports meta-training-set and meta-test-task sizes but does not specify a separate validation set or any validation-split details. |
| Hardware Specification | Yes | We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. |
| Software Dependencies | No | The paper mentions using deep neural networks but does not provide specific software dependencies or their version numbers (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | No | The paper states 'See section A.5 for more implementation details,' indicating that specific experimental setup details like hyperparameters are not present in the main text. |
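
To make the quoted setup concrete, here is a minimal sketch of the supervised-learning objects the paper names: samples z = (x, y), a hypothesis h_w realized as a neural network with weight vector w ∈ ℝ^d, and the cross-entropy loss ℓ(h_w, z). The framework (PyTorch) and the tiny architecture are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch only: samples z = (x, y), a hypothesis h_w realized as
# a neural network, and the cross-entropy loss ell(h_w, z). The architecture
# here is an assumption for demonstration, not the one used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypothesisNet(nn.Module):
    """A hypothesis h_w: w is the concatenation of all network weights."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 256)   # MNIST-sized input (assumed)
        self.fc2 = nn.Linear(256, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.view(x.size(0), -1)            # flatten the image x
        return self.fc2(F.relu(self.fc1(x)))

def loss(h_w: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """ell(h_w, z) averaged over a batch of samples z = (x, y)."""
    return F.cross_entropy(h_w(x), y)
```

A forward pass on random data, e.g. `loss(HypothesisNet(), torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))`, exercises the sketch end to end.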
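The pseudocode referenced in the table (supplement, Section A.4) optimizes PAC-Bayes bounds whose complexity terms are driven by KL divergences between Gaussian distributions over the network weights. The closed-form KL between two factorized Gaussians below is standard; how it is scaled and combined inside the paper's bounds is omitted here, so this is a building block, not the authors' objective.

```python
# Standard closed-form KL(Q || P) between factorized Gaussians over the
# weight vector w, parameterized by means and log standard deviations.
# How this quantity enters the paper's PAC-Bayes bounds is not reproduced.
import torch

def kl_diag_gaussians(mu_q: torch.Tensor, log_sigma_q: torch.Tensor,
                      mu_p: torch.Tensor, log_sigma_p: torch.Tensor) -> torch.Tensor:
    """KL(N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2)), summed over dimensions."""
    var_q = torch.exp(2 * log_sigma_q)
    var_p = torch.exp(2 * log_sigma_p)
    kl = 0.5 * (var_q / var_p
                + (mu_p - mu_q) ** 2 / var_p
                - 1.0
                + 2 * (log_sigma_p - log_sigma_q))
    return kl.sum()
```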
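The two task environments are built by augmenting MNIST. One hedged way to realize a "sample a new task from the environment" step is to fix a random pixel permutation per task, in the spirit of pixel-permutation environments common in this literature. The torchvision usage, the data path, and this particular augmentation are assumptions, not the paper's exact recipe.

```python
# Hedged sketch: sampling one task from an MNIST-based task environment by
# fixing a random pixel permutation for that task. The torchvision calls are
# standard; the data path and this specific augmentation are assumptions.
import torch
from torchvision import datasets, transforms

def sample_permuted_pixels_task(seed: int) -> datasets.MNIST:
    """Each task = MNIST with its own fixed random permutation of the pixels."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(28 * 28, generator=g)  # task-specific permutation

    def permute_pixels(img: torch.Tensor) -> torch.Tensor:
        return img.view(-1)[perm].view(1, 28, 28)

    transform = transforms.Compose([transforms.ToTensor(), permute_pixels])
    return datasets.MNIST("./data", train=True, download=True,
                          transform=transform)
```

Sampling tasks with different seeds then yields the kind of related-but-distinct task collection a meta-learner needs.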