Ornstein Auto-Encoders

Authors: Youngwon Choi, Joong-Ho Won

IJCAI 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our experiments show that OAEs successfully separate individual sequences in the latent space, and can generate new variations of unknown, as well as known, identity." |
| Researcher Affiliation | Academia | Department of Statistics, Seoul National University, Republic of Korea |
| Pseudocode | Yes | Algorithm 1: Ornstein Auto-Encoder for Exchangeable Data |
| Open Source Code | No | "Details of implementation are given in the Online Supplement available at https://tinyurl.com/y5x6ufuj." |
| Open Datasets | Yes | "Consider the VGGFace2 dataset [Cao et al., 2018], an expansion of the famous VGGFace dataset [Parkhi et al., 2015]" and "The images of the MNIST dataset show strong correlations within a digit." |
| Dataset Splits | No | The paper describes training and test sets for both VGGFace2 and MNIST, but does not explicitly mention a separate validation split or dataset. |
| Hardware Specification | No | No specific hardware details (such as GPU or CPU models) are mentioned in the paper. |
| Software Dependencies | No | The paper mentions optimizers and normalization techniques but does not provide version numbers for any software dependencies or libraries (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | "We chose d_z = 128 as the latent space dimension, and used hyperparameters µ₀ = 0, σ₀² = 1, τ₀² = 100. The encoder-decoder architecture had 13.6M parameters and the discriminator had 12.8M parameters. We set λ₁ = 10, λ₂ = 10 for OAE, and λ = 10 for WAE and cAAE. All models were trained for 100 epochs with a constant learning rate of 0.0005 for the encoder and decoder, and 0.001 for the discriminator. We used mini-batches of size 200." |
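
The experiment setup row above fixes the latent dimension, prior hyperparameters, penalty weights, learning rates, batch size, and epoch count, but (as noted under Software Dependencies) the paper does not pin down a framework or library versions. The sketch below is a minimal, hypothetical configuration assuming PyTorch with Adam optimizers; the small `nn.Sequential` stand-in networks are illustrative placeholders, not the 13.6M-parameter encoder-decoder or 12.8M-parameter discriminator reported in the paper.

```python
# Minimal sketch of the reported training configuration.
# Assumptions: PyTorch, Adam, and tiny stand-in networks (the paper does not
# specify framework, optimizer version, or layer-by-layer architecture here).
import torch
from torch import nn

config = {
    "latent_dim": 128,       # d_z
    "mu0": 0.0,              # prior hyperparameter µ₀
    "sigma0_sq": 1.0,        # prior hyperparameter σ₀²
    "tau0_sq": 100.0,        # prior hyperparameter τ₀²
    "lambda1": 10.0,         # OAE penalty weight λ₁
    "lambda2": 10.0,         # OAE penalty weight λ₂
    "epochs": 100,
    "batch_size": 200,
    "lr_autoencoder": 5e-4,  # constant learning rate for encoder and decoder
    "lr_discriminator": 1e-3,
}

d_z = config["latent_dim"]
d_x = 784  # example input size (hypothetical; not taken from the paper)

# Stand-in networks, far smaller than the reported 13.6M / 12.8M parameter models.
encoder = nn.Sequential(nn.Linear(d_x, 512), nn.ReLU(), nn.Linear(512, d_z))
decoder = nn.Sequential(nn.Linear(d_z, 512), nn.ReLU(), nn.Linear(512, d_x))
discriminator = nn.Sequential(nn.Linear(d_z, 512), nn.ReLU(), nn.Linear(512, 1))

# Constant learning rates as reported; the choice of Adam is an assumption.
opt_ae = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()),
    lr=config["lr_autoencoder"],
)
opt_disc = torch.optim.Adam(
    discriminator.parameters(),
    lr=config["lr_discriminator"],
)
```

The prior hyperparameters µ₀, σ₀², τ₀² and the penalty weights λ₁, λ₂ are carried into the config verbatim; how they enter the OAE objective is defined by Algorithm 1 and the Online Supplement, not by this sketch.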