Amortized Bayesian Meta-Learning
Authors: Sachin Ravi, Alex Beatson
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our proposed model on experiments involving contextual bandits and on measuring uncertainty in few-shot learning benchmarks. |
| Researcher Affiliation | Academia | Princeton University |
| Pseudocode | Yes | The pseudo-code for training and evaluation is given in Algorithms 1 and 2 in the appendix. |
| Open Source Code | No | The paper does not explicitly state that its own source code is available or provide a link to a repository. |
| Open Datasets | Yes | We consider two few-shot learning benchmarks: CIFAR-100 and miniImageNet, where both datasets consist of 100 classes and 600 images per class... |
| Dataset Splits | Yes | We split the 100 classes into separate sets of 64 classes for training, 16 classes for validation, and 20 classes for testing for both of the datasets (using the split from Ravi & Larochelle (2016) for miniImageNet, while using our own for CIFAR-100 as a commonly used split does not exist). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Optimizer Adam' but does not specify any software dependencies with version numbers (e.g., specific library versions for deep learning frameworks like TensorFlow or PyTorch). |
| Experiment Setup | Yes | Table 3 lists hyperparameters for the contextual bandit experiments and Table 4 lists hyperparameters for the few-shot learning experiments. Additionally: "For the few-shot learning experiments, we found it necessary to downweight the inner KL term for better performance in our model." |
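The 64/16/20 class split reported above can be sketched as follows. This is an illustrative shuffle-based split only: the paper's miniImageNet split follows Ravi & Larochelle (2016) rather than a fresh shuffle, and the seed and function name here are hypothetical.

```python
import random

def split_classes(num_classes=100, sizes=(64, 16, 20), seed=0):
    """Shuffle class indices and partition them into disjoint
    train/val/test class sets, mirroring the 64/16/20 split
    described in the paper. Illustrative only; the paper's
    miniImageNet split is the fixed one from Ravi & Larochelle
    (2016), not a random shuffle."""
    assert sum(sizes) == num_classes
    rng = random.Random(seed)  # hypothetical seed for reproducibility
    classes = list(range(num_classes))
    rng.shuffle(classes)
    n_train, n_val, _ = sizes
    train = classes[:n_train]
    val = classes[n_train:n_train + n_val]
    test = classes[n_train + n_val:]
    return train, val, test

train, val, test = split_classes()
print(len(train), len(val), len(test))  # 64 16 20
```

Episodes for N-way, k-shot evaluation are then sampled only from the class set of the relevant partition, so test-time classes are never seen during training.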