Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes

Authors: Andrew Foong, Wessel Bruinsma, Jonathan Gordon, Yann Dubois, James Requeima, Richard Turner

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the strong performance and generalization capabilities of ConvNPs on 1D regression, image completion, and various tasks with real-world spatio-temporal data.
Researcher Affiliation | Collaboration | Andrew Y. K. Foong (University of Cambridge, ykf21@cam.ac.uk); Wessel P. Bruinsma (University of Cambridge & Invenia Labs, wpb23@cam.ac.uk); Jonathan Gordon (University of Cambridge, jg801@cam.ac.uk); Yann Dubois (University of Cambridge, yanndubois96@gmail.com); James Requeima (University of Cambridge & Invenia Labs, jrr41@cam.ac.uk); Richard E. Turner (University of Cambridge & Microsoft Research, ret26@cam.ac.uk)
Pseudocode | Yes | See App C for a full description of the ConvCNP, including pseudocode.
Open Source Code | Yes | Code to reproduce the 1D regression experiments can be found at https://github.com/wesselb/NeuralProcesses.jl, and code to implement the image-completion experiments can be found at https://github.com/YannDubs/Neural-Process-Family.
Open Datasets | Yes | We evaluate ConvNPs on image completion tasks focusing on spatial generalization. To test this, we consider zero-shot multi-MNIST (ZSMM)...
Dataset Splits | No | No dataset split details (exact percentages, sample counts, or splitting methodology for training, validation, and test sets) are provided in the main text; the paper refers only to 'context and target sets' and to evaluation 'within' and 'beyond' the training range.
Hardware Specification | No | No hardware details (e.g., GPU/CPU models or memory amounts) are reported; the paper describes training the models but gives no information about the machines used.
Software Dependencies | No | No ancillary software details, such as library names with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x), are given in the paper; code repositories are linked, but the text itself does not list dependencies.
Experiment Setup | No | No concrete hyperparameter values, training configurations, or system-level settings appear in the main text; the paper states 'Full details provided in the supplement' and defers to App J and App K for the full experimental details.
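For context on what the referenced pseudocode describes: the core of the ConvCNP is a translation-equivariant SetConv encoding that smears a discrete context set onto a density channel and a (normalised) data channel over a uniform grid. The snippet below is a minimal illustrative sketch of that encoding only; the function names, the RBF length scale, and the grid choice are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rbf(dists, length_scale=0.2):
    """RBF kernel psi(x - x_i) used to smear discrete observations onto a grid."""
    return np.exp(-0.5 * (dists / length_scale) ** 2)

def setconv_encode(x_ctx, y_ctx, x_grid, length_scale=0.2):
    """Map a 1D context set {(x_i, y_i)} to a two-channel functional
    representation on a grid: a density channel (where data lives) and a
    data channel (kernel-weighted values, normalised by the density)."""
    # Pairwise differences between grid points and context inputs: shape (G, N)
    dists = x_grid[:, None] - x_ctx[None, :]
    weights = rbf(dists, length_scale)              # psi(x - x_i)
    density = weights.sum(axis=1)                   # channel 0: data density
    signal = (weights * y_ctx[None, :]).sum(axis=1)
    data = signal / np.maximum(density, 1e-8)       # channel 1: normalised values
    return np.stack([density, data], axis=-1)       # shape (G, 2)

# Example: encode 3 context points on a grid of 50 points in [-1, 1]
x_ctx = np.array([-0.5, 0.0, 0.7])
y_ctx = np.array([1.0, -1.0, 0.5])
x_grid = np.linspace(-1, 1, 50)
rep = setconv_encode(x_ctx, y_ctx, x_grid)
print(rep.shape)  # (50, 2)
```

In the full model, this gridded representation is passed through a CNN and decoded back to target locations; only the encoding step is sketched here.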