Bridge the Inference Gaps of Neural Processes via Expectation Maximization

Authors: Qi Wang, Marco Federici, Herke van Hoof

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5 EXPERIMENTS AND ANALYSIS: In this section, two central questions are answered: (i) can variational EM based models, SI-NPs, achieve a better local optimum than vanilla NPs? (ii) what is the role of randomness in functional priors? Specifically, we examine the influence of NPs' optimization objectives on typical downstream tasks and understand the functional prior quantitatively."
Researcher Affiliation | Academia | "AMLab, University of Amsterdam, 1098XH, Amsterdam, the Netherlands"
Pseudocode | Yes | "Algorithm 1: Variational Expectation Maximization for NPs." (a hedged training-loop sketch follows the table)
Open Source Code | Yes | "To enable researchers to implement our developed method in studies, we leave the anonymous Github link here: https://anonymous.4open.science/r/SI_NPs-C832, where we provide an example of SI-NPs. The full code implementations of our method will be released in the final version."
Open Datasets | Yes | "Image Datasets. Benchmark image datasets include MNIST (Bottou et al., 1994), FMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009) and SVHN (Sermanet et al., 2012)." (a torchvision loading sketch follows the table)
Dataset Splits | No | The paper mentions "meta training" and "meta testing" phases, and its statement that "early stop is used when it reaches convergence" implies a validation set, but it does not specify explicit train/validation/test splits (e.g., percentages or counts).
Hardware Specification | Yes | "In this project, we use NVIDIA 1080-Ti GPUs to finish all experiments."
Software Dependencies | No | "Pytorch works as the toolkit to program and run experiments." No specific version number is provided for PyTorch or any other library.
Experiment Setup | Yes | "The dimension of latent variables is 128. The Encoder is a two hidden layer neural network with 128 neuron units for each layer. The Decoder is a one hidden layer neural network with 128 neuron units. The optimizer's learning rate is 5e-4. For all methods, we sample 100 tasks as one batch to train in each iteration, and the number of iteration steps in meta training is 100000." (a hedged configuration sketch follows the table)
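Algorithm 1 itself is not reproduced in this report. As a rough illustration of what a variational-EM-style meta-training step for a latent neural process can look like, the sketch below uses a set encoder that outputs a Gaussian over the latent z, a decoder that maps target inputs and z to predictions, and a self-normalized importance-weighted update. All class and function names (Encoder, Decoder, em_step) are hypothetical, the observation model and weighting scheme are simplifying assumptions, and this is not the authors' implementation; their own example is at the anonymous link above.

```python
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps a context set (x, y) to a Gaussian over the latent z (hypothetical sketch)."""
    def __init__(self, x_dim=1, y_dim=1, z_dim=128, h_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, h_dim), nn.ReLU(),
        )
        self.mu = nn.Linear(h_dim, z_dim)
        self.log_sigma = nn.Linear(h_dim, z_dim)

    def forward(self, x, y):
        # Mean-pool over the set dimension for a permutation-invariant summary.
        r = self.net(torch.cat([x, y], dim=-1)).mean(dim=1)
        return self.mu(r), self.log_sigma(r)


class Decoder(nn.Module):
    """Predicts target outputs from target inputs and a sampled latent z (hypothetical sketch)."""
    def __init__(self, x_dim=1, y_dim=1, z_dim=128, h_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, y_dim),
        )

    def forward(self, x, z):
        z = z.unsqueeze(1).expand(-1, x.size(1), -1)  # broadcast z over target points
        return self.net(torch.cat([x, z], dim=-1))


def em_step(encoder, decoder, optimizer, xc, yc, xt, yt, n_samples=16):
    """One EM-style update: the E-step draws latent samples from the encoder and forms
    self-normalized importance weights from the target likelihood; the M-step takes a
    gradient step on the weighted log-likelihood."""
    mu, log_sigma = encoder(xc, yc)
    q = torch.distributions.Normal(mu, log_sigma.exp())
    z = q.rsample((n_samples,))                             # [S, B, z_dim]
    preds = torch.stack([decoder(xt, z_s) for z_s in z])    # [S, B, T, y_dim]
    # Gaussian observation model with fixed unit variance (a simplifying assumption).
    log_lik = -0.5 * ((preds - yt) ** 2).sum(dim=(-1, -2))  # [S, B]
    with torch.no_grad():
        weights = torch.softmax(log_lik, dim=0)             # E-step: normalized weights
    loss = -(weights * log_lik).sum(dim=0).mean()           # M-step objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```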
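The four image benchmarks listed under Open Datasets are all available through torchvision. The loader calls below are only a convenience sketch; the root path and transform are placeholders, not taken from the paper.

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
root = "./data"  # placeholder path, not from the paper

mnist  = datasets.MNIST(root, train=True, download=True, transform=to_tensor)
fmnist = datasets.FashionMNIST(root, train=True, download=True, transform=to_tensor)
cifar  = datasets.CIFAR10(root, train=True, download=True, transform=to_tensor)
svhn   = datasets.SVHN(root, split="train", download=True, transform=to_tensor)
```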
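The Experiment Setup row fixes the latent dimension, hidden-layer widths, learning rate, batch size (in tasks), and iteration count. A minimal sketch of wiring these together, assuming the hypothetical Encoder, Decoder, and em_step from the sketch above, is shown below; the optimizer type (Adam) and the toy sample_task_batch helper are assumptions, only the numbers come from the quoted text.

```python
import torch

config = {
    "z_dim": 128,                 # dimension of latent variables
    "h_dim": 128,                 # hidden units per layer (encoder: 2 layers, decoder: 1 layer)
    "lr": 5e-4,                   # optimizer learning rate
    "tasks_per_batch": 100,       # tasks sampled per training iteration
    "meta_train_iters": 100_000,  # meta-training iteration steps
}


def sample_task_batch(batch_size, n_context=10, n_target=20):
    """Toy task sampler (random sine tasks) standing in for the real meta-training data."""
    xc = torch.rand(batch_size, n_context, 1) * 2 - 1
    xt = torch.rand(batch_size, n_target, 1) * 2 - 1
    amp = torch.rand(batch_size, 1, 1) * 2
    return xc, amp * torch.sin(3 * xc), xt, amp * torch.sin(3 * xt)


encoder = Encoder(z_dim=config["z_dim"], h_dim=config["h_dim"])
decoder = Decoder(z_dim=config["z_dim"], h_dim=config["h_dim"])
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=config["lr"])  # optimizer type is an assumption

for step in range(config["meta_train_iters"]):
    xc, yc, xt, yt = sample_task_batch(config["tasks_per_batch"])
    em_step(encoder, decoder, optimizer, xc, yc, xt, yt)
```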