Residual Neural Processes

Authors: Byung-Jun Lee, Seunghoon Hong, Kee-Eung Kim (pp. 4545-4552)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate that the RNP shows faster convergence and better performance, both qualitatively and quantitatively." and "Following the training method of an NP, we train on multiple realizations of the underlying data generating process."
Researcher Affiliation | Academia | Byung-Jun Lee,¹ Seunghoon Hong,¹ Kee-Eung Kim¹,² (¹School of Computing, KAIST, Republic of Korea; ²Graduate School of AI, KAIST, Republic of Korea)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Code used for experiments can be found at: https://github.com/dlqudwns/Residual-Neural-Process"
Open Datasets | Yes | "We trained and compared BLL, ANP, and RNP models on MNIST (LeCun et al. 1998) and sub-sampled 32×32 CelebA (Liu et al. 2015)." and "The functions to train are generated from a Gaussian Process with a squared exponential kernel and small likelihood noise, with hyper-parameters fixed."
Dataset Splits | Yes | "The number of contexts and the number of targets is chosen randomly (|C|, |T| ∼ U[3, 100]). Both X_C and X_T are also drawn uniformly in [−20, 20]." and "We used random sizes of contexts and targets (|C|, |T| ∼ U[3, 200])." (see the data-sampling sketch below the table)
Hardware Specification | No | The paper reports wall-clock time in its experimental results but does not specify the hardware (e.g., CPU/GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and an ANP structure from prior work, but it does not list software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "Adam optimizer with a learning rate of 5e-5 is used throughout all experiments.", "In this experiment, we used d_h = 150.", and "d_h = 250 is used in this experiment." (see the training-setup sketch below the table)
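
The Open Datasets and Dataset Splits rows describe the synthetic 1D regression data: functions drawn from a GP prior with a squared exponential kernel and small likelihood noise, context/target sizes |C|, |T| ∼ U[3, 100], and inputs uniform in [−20, 20]. The following data-sampling sketch reproduces that description in NumPy; the kernel hyper-parameter values, the noise level, and the choice to keep context and target points disjoint are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: sample one 1D regression task from a GP prior with a squared
# exponential kernel, then split it into random context/target sets as quoted
# in the table above. Lengthscale, signal variance, and noise are assumptions.
import numpy as np

def se_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared exponential (RBF) kernel matrix between two sets of 1D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_task(rng, max_points=100, x_range=(-20.0, 20.0), noise=1e-2):
    """Draw one task: |C|, |T| ~ U[3, max_points], inputs uniform in x_range,
    outputs drawn jointly from the GP prior with small likelihood noise."""
    n_context = rng.integers(3, max_points + 1)
    n_target = rng.integers(3, max_points + 1)
    n = n_context + n_target
    x = rng.uniform(*x_range, size=n)
    cov = se_kernel(x, x) + noise * np.eye(n)
    y = rng.multivariate_normal(np.zeros(n), cov)
    return (x[:n_context], y[:n_context]), (x[n_context:], y[n_context:])

rng = np.random.default_rng(0)
(context_x, context_y), (target_x, target_y) = sample_gp_task(rng)
print(context_x.shape, target_x.shape)
```

Whether the context points are also included in the target set varies across NP implementations; they are kept disjoint here only for simplicity.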
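
The Experiment Setup row gives the optimization details the paper does state: Adam with a learning rate of 5e-5 throughout, and a hidden width d_h of 150 or 250 depending on the experiment. The training-setup sketch below shows that configuration in PyTorch purely for illustration; the paper does not name its framework, the small MLP is a stand-in for the actual RNP architecture, and the MSE loss stands in for the NP-style objective.

```python
# Minimal sketch of the stated training configuration: Adam, lr = 5e-5, hidden
# width d_h. The model and loss below are placeholders, not the paper's RNP.
import torch

d_h = 150  # hidden width in the 1D regression experiment (250 in the image experiments)
model = torch.nn.Sequential(
    torch.nn.Linear(1, d_h),
    torch.nn.ReLU(),
    torch.nn.Linear(d_h, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

# One illustrative optimization step on dummy data.
x = torch.randn(64, 1)
y = torch.randn(64, 1)
loss = torch.nn.functional.mse_loss(model(x), y)  # stand-in for the NP objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
```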