Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Ornstein Auto-Encoders
Authors: Youngwon Choi, Joong-Ho Won
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that OAEs successfully separate individual sequences in the latent space, and can generate new variations of unknown, as well as known, identity. |
| Researcher Affiliation | Academia | Department of Statistics, Seoul National University, Republic of Korea |
| Pseudocode | Yes | Algorithm 1 Ornstein Auto-Encoder for Exchangeable Data |
| Open Source Code | No | Details of implementation are given in the Online Supplement available at https://tinyurl.com/y5x6ufuj. |
| Open Datasets | Yes | Consider the VGGFace2 dataset [Cao et al., 2018], an expansion of the famous VGGFace dataset [Parkhi et al., 2015]. ... The images of the MNIST dataset show strong correlations within a digit. |
| Dataset Splits | No | The paper describes training and test sets for both VGGFace2 and MNIST, but does not explicitly mention a separate validation split or dataset. |
| Hardware Specification | No | No specific hardware details (like GPU or CPU models) are mentioned in the paper. |
| Software Dependencies | No | The paper mentions optimizers and normalization techniques but does not provide specific version numbers for any software dependencies or libraries (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | We chose dz = 128 as the latent space dimension, and used hyperparameters µ0 = 0, σ0² = 1, τ0² = 100. The encoder-decoder architecture had 13.6M parameters and the discriminator had 12.8M parameters. We set λ1 = 10, λ2 = 10 for OAE, and λ = 10 for WAE and cAAE. All models were trained for 100 epochs with a constant learning rate of 0.0005 for the encoder and decoder, and 0.001 for the discriminator. We used mini-batches of size 200. |
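The Experiment Setup row above can be collected into a single configuration record, which makes the reported values easy to reuse when attempting a reproduction. This is a minimal sketch, not code from the paper: the dictionary and its key names are our own invention, and only the numeric values are taken from the quoted excerpt (the prior-variance symbols were reconstructed from mis-encoded text, so treat σ0² and τ0² as best-effort readings).

```python
# Hedged sketch of the training configuration reported for OAE.
# Key names are hypothetical; values come from the Experiment Setup row.
OAE_TRAINING_CONFIG = {
    "latent_dim": 128,           # d_z, latent space dimension
    "mu0": 0.0,                  # prior mean
    "sigma0_sq": 1.0,            # prior variance (reconstructed from garbled symbols)
    "tau0_sq": 100.0,            # second variance hyperparameter (same caveat)
    "lambda1": 10.0,             # OAE penalty weight λ1
    "lambda2": 10.0,             # OAE penalty weight λ2
    "lambda_baselines": 10.0,    # λ used for the WAE and cAAE baselines
    "epochs": 100,
    "lr_encoder_decoder": 5e-4,  # constant learning rate for encoder/decoder
    "lr_discriminator": 1e-3,    # constant learning rate for discriminator
    "batch_size": 200,
    "params_encoder_decoder": 13_600_000,  # ~13.6M parameters
    "params_discriminator": 12_800_000,    # ~12.8M parameters
}

def summarize(config: dict) -> str:
    """Render the configuration as a one-line reproduction note."""
    return (
        f"d_z={config['latent_dim']}, "
        f"epochs={config['epochs']}, "
        f"batch={config['batch_size']}, "
        f"lr={config['lr_encoder_decoder']}/{config['lr_discriminator']}"
    )
```

A reproduction script could start from this dictionary and log `summarize(OAE_TRAINING_CONFIG)` alongside its results, so any deviation from the paper's reported settings is visible at a glance.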