SCHA-VAE: Hierarchical Context Aggregation for Few-Shot Generation

Authors: Giorgio Giannone, Ole Winther

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we discuss the experimental setup and results for our model. In particular, for all models our interests are: I) quantitative evaluation of few-shot generation; II) few-shot conditional and unconditional sampling from the model; III) transfer of generative capacities to new datasets and input set sizes; IV) evidence that the hierarchical context representation is useful for the model.
Researcher Affiliation | Academia | Giorgio Giannone (1), Ole Winther (1,2). (1) Technical University of Denmark; (2) University of Copenhagen.
Pseudocode | Yes | Algorithm 1: Iterative Sampling; Algorithm 2: Learnable Aggregation (LAG); Algorithm 3: Conditional and Refined Sampling for SCHA-VAE. (See the first sketch after this table.)
Open Source Code | Yes | Code: https://github.com/georgosgeorgos/hierarchical-few-shot-generative-models
Open Datasets | Yes | Datasets. We train models on binarized Omniglot (Lake et al., 2015), CelebA (Liu et al., 2015), and FS-CIFAR100 (Oreshkin et al., 2018)... We perform quantitative evaluation on Omniglot, MNIST (LeCun, 1998), DOUBLE-MNIST (Sun, 2019), TRIPLE-MNIST (Sun, 2019), CelebA, and FS-CIFAR100. (See the second sketch after this table.)
Dataset Splits | Yes | We split the sets into train/val/test sets. Using these splits, we can dynamically create sets of different dimensions, generating a new collection of training sets at each epoch during training. For training we use the episodic approach proposed in (Vinyals et al., 2016). (See the third sketch after this table.)
Hardware Specification | No | The paper mentions 'computational resources' in the introduction but does not specify hardware details (e.g., GPU/CPU models, memory) used for the experiments.
Software Dependencies | No | The paper mentions software such as PyTorch and CUDA but does not provide version numbers for these or any other dependencies.
Experiment Setup | Yes | Training Details. We follow the approach proposed in (Edwards & Storkey, 2016) for training with sets... During training, the input set size is always between 2 and 20. Table 5: Relevant Hyperparameters for SCHA-VAE. (See the fourth sketch after this table.)
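
First sketch: a minimal PyTorch illustration of what a learnable set-aggregation module (cf. Algorithm 2, LAG) could look like. This is a hedged reading, assuming attention-style pooling of per-sample features into a context vector; the class name LearnableAggregation and the single learned query are our assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch of learnable set aggregation (cf. Algorithm 2, "LAG").
# All names here are illustrative; consult the repository for the real module.
import torch
import torch.nn as nn

class LearnableAggregation(nn.Module):
    """Pool a set of per-sample features into one context vector
    using a single learned attention query."""

    def __init__(self, feature_dim: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(feature_dim))
        self.key = nn.Linear(feature_dim, feature_dim)
        self.value = nn.Linear(feature_dim, feature_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, set_size, feature_dim) -- features for each set element.
        k = self.key(h)                                  # (B, S, D)
        v = self.value(h)                                # (B, S, D)
        scores = (k @ self.query) / k.shape[-1] ** 0.5   # (B, S)
        weights = scores.softmax(dim=-1).unsqueeze(-1)   # (B, S, 1)
        return (weights * v).sum(dim=1)                  # (B, D) pooled context

# Usage: context = LearnableAggregation(64)(torch.randn(8, 5, 64))
```

A permutation-invariant pooling like this is the natural fit for set inputs, since the context must not depend on the order of the set elements.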
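Second sketch: for the binarized Omniglot inputs, one common recipe is dynamic binarization, sketched below under the assumption that images are scaled to [0, 1]. Whether SCHA-VAE binarizes statically or dynamically is not stated in the excerpt above.

```python
# Dynamic binarization sketch (an assumption, not confirmed by the paper):
# each pixel intensity in [0, 1] is treated as a Bernoulli probability and
# resampled every time a batch is drawn.
import torch

def binarize(images: torch.Tensor) -> torch.Tensor:
    return torch.bernoulli(images)  # images: float tensor with values in [0, 1]
```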
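Third sketch: a minimal version of the episodic set construction described in the splits row, assuming a dict mapping each class in a split to its example indices. The helper names and the class-then-examples sampling order are our assumptions.

```python
# Hypothetical episodic sampling (cf. Vinyals et al., 2016):
# a fresh collection of sets, with varying sizes, is drawn every epoch.
import random

def sample_episode(class_to_indices: dict, set_size: int) -> list:
    """Draw one training set: pick a class, then set_size of its examples.
    Assumes every class has at least set_size examples."""
    cls = random.choice(list(class_to_indices))
    return random.sample(class_to_indices[cls], set_size)

def episodes_for_epoch(class_to_indices: dict, n_episodes: int,
                       min_size: int = 2, max_size: int = 20) -> list:
    # Set sizes vary per episode, matching the 2-20 range quoted above.
    return [
        sample_episode(class_to_indices, random.randint(min_size, max_size))
        for _ in range(n_episodes)
    ]
```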
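Fourth sketch: a bare-bones training skeleton for set-based training with a set-level ELBO. The model.elbo interface, the loader of episodes, and the single-set batching are placeholders, not the authors' actual API; see the repository linked above for the real training loop.

```python
# Hypothetical set-based training step; `model.elbo` is a placeholder for
# whatever set-level evidence lower bound SCHA-VAE optimizes.
import torch

def train_epoch(model, episodes, optimizer, device="cpu"):
    model.train()
    for x_set in episodes:                    # x_set: (set_size, C, H, W)
        x_set = x_set.to(device).unsqueeze(0)  # add batch dim: (1, S, C, H, W)
        loss = -model.elbo(x_set)             # maximize the set-level ELBO
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```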