Variational Autoencoder with Arbitrary Conditioning
Authors: Oleg Ivanov, Michael Figurnov, Dmitry Vetrov
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental evaluation on synthetic data, as well as feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and diversity of the generated samples. |
| Researcher Affiliation | Collaboration | Oleg Ivanov, Samsung AI Center Moscow, Moscow, Russia, tigvarts@gmail.com; Michael Figurnov, National Research University Higher School of Economics, Moscow, Russia, michael@figurnov.ru; Dmitry Vetrov, Samsung-HSE Laboratory, National Research University Higher School of Economics, and Samsung AI Center Moscow, Moscow, Russia, vetrovd@yandex.ru |
| Pseudocode | No | The paper describes the model and its learning procedure using mathematical formulas and text, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/tigvarts/vaeac. |
| Open Datasets | Yes | UCI datasets collection (Lichman, 2013)... MNIST (LeCun et al., 1998), Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets |
| Dataset Splits | Yes | We split the dataset into train and test set with size ratio 3:1. ... use 25% of training data as validation set to select the best model among all epochs of training. |
| Hardware Specification | No | The paper mentions 'training time' and 'neural networks' which implies the use of computational hardware, but it does not specify any particular GPU or CPU models, memory details, or other specific hardware configurations used for the experiments. |
| Software Dependencies | No | The paper mentions software like 'PyTorch', 'TensorFlow', and 'Adam' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | In all experiments we use optimization method Adam (Kingma & Ba, 2014)... We use 16 latent variables... We train model for 50 epochs... We use 32 latent variables... At the training stage we used a rectangle mask with uniprobable random corners. We reject masks with width or height less than 16pt. |
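
To make the quoted splits concrete, the sketch below shows one way to reproduce the 3:1 train/test split with 25% of the training portion held out for validation, as described in the Dataset Splits row. The use of scikit-learn's `train_test_split`, the `make_splits` helper, and the fixed seed are illustrative assumptions, not the authors' code.

```python
from sklearn.model_selection import train_test_split

def make_splits(X, seed=0):
    # 3:1 train/test ratio, i.e. hold out 25% of the data for testing.
    X_train, X_test = train_test_split(X, test_size=0.25, random_state=seed)
    # 25% of the training data becomes a validation set, used to select
    # the best model among all epochs of training.
    X_train, X_val = train_test_split(X_train, test_size=0.25, random_state=seed)
    return X_train, X_val, X_test
```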
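
The Experiment Setup row also quotes the image-inpainting mask scheme: rectangle masks with "uniprobable random corners", rejecting masks whose width or height is below 16 pixels. The snippet below is a minimal sketch of one plausible reading of that description; the NumPy-based rejection sampler and the convention that 1 marks unobserved pixels are assumptions, and the authors' actual implementation is in the linked repository.

```python
import numpy as np

def sample_rectangle_mask(height, width, min_size=16, rng=None):
    """Sample a binary mask whose rectangle is defined by two corners drawn
    uniformly at random, rejecting rectangles narrower or shorter than
    min_size pixels."""
    rng = rng if rng is not None else np.random.default_rng()
    while True:
        y1, y2 = sorted(rng.integers(0, height + 1, size=2))
        x1, x2 = sorted(rng.integers(0, width + 1, size=2))
        if (y2 - y1) >= min_size and (x2 - x1) >= min_size:
            break
    mask = np.zeros((height, width), dtype=np.float32)
    mask[y1:y2, x1:x2] = 1.0  # 1 marks pixels to be inpainted (unobserved)
    return mask
```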