Neural Variational Dropout Processes

Authors: Insu Jeon, Youngjin Park, Gunhee Kim

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compared the proposed method with other meta-learning approaches on few-shot learning tasks such as 1D stochastic regression, image inpainting, and classification. The results show the excellent performance of NVDPs.
Researcher Affiliation | Collaboration | Insu Jeon (1), Youngjin Park (2), Gunhee Kim (1); (1) Seoul National University, (2) Everdoubling LLC., Seoul, South Korea
Pseudocode | No | The paper does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a direct statement or link to its own open-source code for the described methodology.
Open Datasets | Yes | The paper explicitly states the use of well-known datasets: 'GP Dataset', 'MNIST (LeCun et al., 1998)', 'CelebA (Liu et al., 2015)', 'Omniglot (Lake et al., 2015)', and 'miniImageNet (Ravi & Larochelle, 2017)'.
Dataset Splits | Yes | The task data points are split into a disjoint sample of m contexts and n targets, with m ~ U(3, 97) and n ~ U[m+1, 100), respectively. At test or validation time, the numbers of contexts and targets were chosen as m ~ U(3, 97) and n = 400 - m, respectively. (A split-sampling sketch follows the table.)
Hardware Specification | No | The paper does not specify any hardware details such as CPU/GPU models or memory used for the experiments.
Software Dependencies | No | The paper mentions 'Adam optimizer (Kingma & Ba, 2015)' but does not provide specific version numbers for Adam or any other software dependencies.
Experiment Setup | Yes | The agent NN with 4 hidden layers of 128 units with ReLU activation... All models were trained with the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 5e-4 and 16 task batches for 0.5 million iterations. ... The Adam optimizer with a learning rate of 4e-4 and 16 task batches with 300 epochs were used for training. (A training-setup sketch follows the table.)
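
The Dataset Splits row can be read as the following minimal sketch of the quoted context/target split, assuming a 1D GP regression task. The RBF kernel parameters, input range, and random seed are illustrative assumptions, not values from the paper; only the split sizes m ~ U(3, 97) and n ~ U[m+1, 100) are taken from the quote.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_split_sizes():
    # Quoted training-time split: m ~ U(3, 97) contexts, n ~ U[m+1, 100) targets.
    m = int(rng.integers(3, 98))        # upper bound exclusive -> 3..97
    n = int(rng.integers(m + 1, 100))   # m+1..99
    return m, n

def sample_gp_task(m, n):
    # Hypothetical 1D regression task: m + n points drawn from an RBF-kernel
    # GP prior (a stand-in for the paper's 'GP Dataset'; kernel parameters
    # and input range are assumptions).
    total = m + n
    x = rng.uniform(-2.0, 2.0, size=(total, 1))
    dist = x - x.T
    cov = np.exp(-0.5 * dist ** 2 / 0.25) + 1e-6 * np.eye(total)
    y = np.linalg.cholesky(cov) @ rng.standard_normal((total, 1))
    idx = rng.permutation(total)
    context = (x[idx[:m]], y[idx[:m]])  # m disjoint context points
    target = (x[idx[m:]], y[idx[m:]])   # n disjoint target points
    return context, target

context, target = sample_gp_task(*sample_split_sizes())
```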
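
Similarly, the Experiment Setup row can be sketched in PyTorch as an MLP with 4 hidden layers of 128 units and the quoted Adam settings. The input/output dimensions, the ReLU reading of the garbled activation name, and the role of this network inside NVDP are assumptions for illustration, not details confirmed by the quote.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the 4-hidden-layer, 128-unit MLP quoted above.
# Input/output sizes (1D regression: x -> predictive mean and log-variance)
# are assumptions, not specified in the quoted text.
network = nn.Sequential(
    nn.Linear(1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2),  # predictive mean and log-variance
)

# Optimizer settings as quoted: Adam, learning rate 5e-4,
# 16 tasks per batch, 0.5 million iterations.
optimizer = torch.optim.Adam(network.parameters(), lr=5e-4)
num_iterations, tasks_per_batch = 500_000, 16
```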