Bootstrapping neural processes

Authors: Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | We demonstrate the efficacy of BNP on various types of data and its robustness in the presence of model-data mismatch. In this section, we compare the baseline NP classes (CNP, NP, CANP, and ANP) to our models (BNP, BANP) on both synthetic and real-world datasets.
Researcher Affiliation | Collaboration | KAIST, Daejeon, South Korea; AITRICS, Seoul, South Korea; POSTECH, Pohang, South Korea; University of Oxford, Oxford, England
Pseudocode | No | The paper describes the methodology in prose but does not include any structured pseudocode or algorithm blocks (a hedged sketch of the bootstrap step the method relies on is given after this table).
Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link or an explicit statement of code release) for the source code of the described methodology.
Open Datasets | Yes | We trained the models for EMNIST using the first 10 classes and tested on the remaining 37 classes. [3] for EMNIST and [15] for CelebA
Dataset Splits | Yes | We train with 4000 tasks and validate with 200 tasks. During testing, we evaluate 100 tasks.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library names like PyTorch 1.x or Python 3.x).
Experiment Setup | Yes | We fixed k = 4 for all of our experiments. We train for 1000 epochs with the Adam optimizer, using learning rate 10^-3. We use early stopping with patience 200. (A minimal training-loop sketch matching this setup follows the table.)
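
Although the paper provides no pseudocode, the core operation behind BNP, paired bootstrap resampling of the context set, is compact enough to sketch. The snippet below is a minimal NumPy illustration of that idea under our reading of the paper, not the authors' implementation; the function name `bootstrap_context` and the array shapes are assumptions, with k = 4 matching the quoted setup.

```python
import numpy as np

def bootstrap_context(x_ctx, y_ctx, k=4, rng=None):
    """Draw k paired bootstrap resamples of a context set.

    x_ctx, y_ctx: arrays of shape (n, d_x) and (n, d_y).
    Returns a list of k resampled (x, y) pairs, each of size n,
    drawn with replacement (a standard paired bootstrap).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x_ctx.shape[0]
    resamples = []
    for _ in range(k):
        idx = rng.integers(0, n, size=n)  # indices sampled with replacement
        resamples.append((x_ctx[idx], y_ctx[idx]))
    return resamples

# Example: 10 context points with 1-D inputs and outputs.
x = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
y = np.sin(2.0 * np.pi * x)
boots = bootstrap_context(x, y, k=4)
```

Each resample would then be encoded to produce one of k functional samples; that step depends on the specific NP architecture and is omitted here.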
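
The quoted experiment setup also translates directly into a training-loop skeleton. The PyTorch sketch below wires together the stated hyperparameters (1000 epochs, Adam at learning rate 10^-3, early stopping with patience 200); the model, tasks, and validation loss are toy stand-ins, since the paper's NP architectures and its 4000/200/100 task splits are not reproduced here.

```python
import torch

# Toy stand-ins: a linear model and a synthetic regression task replace
# the paper's NP-family models and meta-learning tasks.
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate 10^-3

best_val, patience, wait = float("inf"), 200, 0  # early stopping, patience 200
for epoch in range(1000):                        # "train for 1000 epochs"
    x = torch.randn(64, 1)                       # stand-in for a training batch
    loss = torch.nn.functional.mse_loss(model(x), torch.sin(x))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():                        # stand-in validation pass
        xv = torch.randn(64, 1)
        val = torch.nn.functional.mse_loss(model(xv), torch.sin(xv)).item()
    if val < best_val:
        best_val, wait = val, 0
    else:
        wait += 1
        if wait >= patience:                     # 200 epochs without improvement
            break
```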