Autoencoding Conditional Neural Processes for Representation Learning

Authors: Victor Prokhorov, Ivan Titov, Siddharth N

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate PPS-VAE over a number of tasks across different visual data, and find that not only can it facilitate better-fit CNPs, but also that the spatial arrangement and values meaningfully characterise image information evaluated through the lens of classification on both within and out-of-data distributions.
Researcher Affiliation | Academia | School of Informatics, University of Edinburgh; ILLC, University of Amsterdam; The Alan Turing Institute.
Pseudocode | Yes | Algorithm 1 PPS-VAE
Open Source Code | Yes | Correspondence to: Victor Prokhorov <email: victorprokhorov91@gmail.com, code: https://github.com/exlab-research/pps-vae>.
Open Datasets | Yes | We use four standard vision datasets: FER2013 (Erhan et al., 2013), CelebA (Liu et al., 2015), CLEVR (Johnson et al., 2017) and Tiny ImageNet (Mnmoustafa, 2017, t-ImageNet), with resolution at 64×64.
Dataset Splits | Yes | Classifiers are trained over three seeds with early stopping, and mean F1-macro scores are reported (see the evaluation sketch after this table).
Hardware Specification | Yes | CPU: AMD EPYC 7413 24-Core Processor; GPU: 1× NVIDIA A40
Software Dependencies | No | The paper mentions software components such as the AdamW optimizer, amsgrad, and the ConvMixer architecture, and implicitly PyTorch (given common deep-learning practice and the nature of the model), but it does not specify version numbers for these packages, libraries, or the programming language used.
Experiment Setup | Yes | We optimise the parameters of the model with the AdamW (Loshchilov & Hutter, 2019) optimiser, setting the learning rate to 2×10⁻⁴, and we also enable amsgrad (Reddi et al., 2018). We train the PPS-VAE for 200 epochs.
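
The quoted experiment setup maps onto a standard optimiser configuration. Below is a minimal sketch, assuming a PyTorch implementation (the paper does not state its framework or versions); the model here is a placeholder stand-in, not the actual PPS-VAE architecture, which lives in the authors' repository.

```python
import torch

# Placeholder module standing in for the PPS-VAE; the real architecture is in
# https://github.com/exlab-research/pps-vae.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, kernel_size=3), torch.nn.ReLU())

# Optimiser settings quoted from the paper: AdamW with a learning rate of 2×10⁻⁴
# and amsgrad enabled (Loshchilov & Hutter, 2019; Reddi et al., 2018).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, amsgrad=True)

num_epochs = 200  # the paper trains PPS-VAE for 200 epochs
```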
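
The "Dataset Splits" row above refers to the classifier evaluation protocol: one classifier is trained per seed (with early stopping) and the reported figure is the mean macro-averaged F1 over three seeds. A minimal sketch of that aggregation, assuming scikit-learn and toy labels (the paper does not specify its metric implementation):

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy ground truth and per-seed predictions; in practice each prediction set comes
# from a classifier trained with a different random seed and early stopping.
y_true = np.array([0, 1, 2, 1, 0, 2])
predictions_per_seed = [
    np.array([0, 1, 2, 1, 0, 1]),  # seed 0
    np.array([0, 1, 1, 1, 0, 2]),  # seed 1
    np.array([0, 2, 2, 1, 0, 2]),  # seed 2
]

# F1-macro averages the per-class F1 scores, weighting every class equally.
seed_scores = [f1_score(y_true, y_pred, average="macro") for y_pred in predictions_per_seed]
mean_f1_macro = float(np.mean(seed_scores))  # reported value: mean over the three seeds
print(f"mean F1-macro over {len(seed_scores)} seeds: {mean_f1_macro:.3f}")
```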