Bayesian Meta Sampling for Fast Uncertainty Adaptation

Authors: Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang, Changyou Chen

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results demonstrate the efficiency and effectiveness of the proposed framework, obtaining better sample quality and faster uncertainty adaptation compared to related methods.
Researcher Affiliation | Academia | Zhenyi Wang (1), Yang Zhao (1), Ping Yu (1), Ruiyi Zhang (2), Changyou Chen (1); (1) State University of New York at Buffalo, (2) Duke University; {zhenyiwa, yzhao63, pingyu, changyou}@buffalo.edu, ryzhang@cs.duke.edu
Pseudocode | Yes | Algorithm 1: Meta training, MAML with ELBO. Algorithm 2: Meta testing, MAML with ELBO.
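For orientation, a minimal PyTorch-style sketch of what a MAML meta-training loop with an ELBO objective (as in Algorithm 1) could look like; `model`, `elbo_loss`, and `sample_tasks` are hypothetical placeholders and the inner-loop step size is an assumption, not the authors' released code.

    import torch
    from torch.optim import Adam

    # Illustrative sketch: MAML inner/outer loop with an ELBO training objective.
    # `model`, `elbo_loss`, and `sample_tasks` are assumed placeholders.
    def maml_elbo_meta_train(model, sample_tasks, elbo_loss,
                             inner_lr=0.01, meta_lr=0.005, n_iters=1000, n_inner=1):
        meta_opt = Adam(model.parameters(), lr=meta_lr)
        for it in range(n_iters):
            meta_opt.zero_grad()
            for task in sample_tasks():                        # batch of tasks
                x_tr, y_tr, x_val, y_val = task                # support / query data
                fast_w = {n: p for n, p in model.named_parameters()}
                for _ in range(n_inner):                       # inner-loop adaptation
                    loss_tr = elbo_loss(model, x_tr, y_tr, params=fast_w)
                    grads = torch.autograd.grad(loss_tr, fast_w.values(), create_graph=True)
                    fast_w = {n: p - inner_lr * g
                              for (n, p), g in zip(fast_w.items(), grads)}
                loss_val = elbo_loss(model, x_val, y_val, params=fast_w)
                loss_val.backward()                            # accumulate meta-gradient
            meta_opt.step()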
Open Source Code | Yes | Our code is made available at: https://github.com/zheshiyige/meta-sampling.git.
Open Datasets | Yes | MNIST and CIFAR-10 (Krizhevsky, 2009), plus UCI repository datasets: Australian (15 features, 690 samples), German (25 features, 1000 samples), Heart (14 features, 270 samples).
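The two image benchmarks can be fetched with torchvision as shown below (the UCI Australian/German/Heart sets are downloaded separately from the UCI repository); this is a convenience sketch, not part of the paper's pipeline.

    from torchvision import datasets, transforms

    # Standard downloads for the image benchmarks mentioned above.
    to_tensor = transforms.ToTensor()
    mnist_train = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
    cifar_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)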
Dataset Splits | Yes | For each task, the dataset $D_\tau$ is divided into two sets $D_\tau^{tr} = \{X_\tau^{tr}, y_\tau^{tr}\}$ and $D_\tau^{val} = \{X_\tau^{val}, y_\tau^{val}\}$; the Mini-ImageNet dataset consists of 64, 16, and 20 classes for training, validation, and testing, respectively.
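A small sketch of the per-task split described above, assuming each task's data arrives as arrays X, y; the 50/50 fraction and function name are illustrative assumptions, since the report does not quote a ratio.

    import numpy as np

    def split_task(X, y, val_fraction=0.5, seed=0):
        """Split one task's data into D_tau^tr and D_tau^val (fraction is illustrative)."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(X))
        n_val = int(len(X) * val_fraction)
        val_idx, tr_idx = idx[:n_val], idx[n_val:]
        return (X[tr_idx], y[tr_idx]), (X[val_idx], y[val_idx])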
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or the type of computing resources used for the experiments.
Software Dependencies | No | The paper mentions Adam, an RBF kernel, and a PyTorch MAML implementation, but does not provide specific version numbers for any of them.
Experiment Setup | Yes | The model is trained with Adam (Kingma & Ba, 2015) with a learning rate of 0.005. We use 1000 particles to approximate the distribution. We set the weight λ in the WGF to 1e-4. The meta sampler is trained for 1000 iterations. We set the batch size to 128 and the learning rate to 0.002.
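The quoted hyperparameters could be collected into a single configuration object as a sketch; only the numeric values come from the paper, while the field names are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class MetaSamplerConfig:
        # Values quoted from the paper; field names are illustrative.
        optimizer: str = "Adam"      # Kingma & Ba, 2015
        meta_lr: float = 0.005       # learning rate for meta-sampler training
        n_particles: int = 1000      # particles approximating the distribution
        wgf_weight: float = 1e-4     # weight lambda in the WGF term
        meta_iters: int = 1000       # meta-sampler training iterations
        batch_size: int = 128        # batch size in the later experiments
        task_lr: float = 0.002       # learning rate in those experiments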