Adversarial Symmetric Variational Autoencoder
Authors: Yunchen Pu, Weiyao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li, Lawrence Carin
NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Duke University {yp42, ww109, r.henao, lc267, zg27, cl319, lcarin}@duke.edu |
| Pseudocode | No | The paper describes algorithms and formulations mathematically but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statements or links indicating that open-source code for the described methodology is available. |
| Open Datasets | Yes | We evaluate our model on three datasets: MNIST, CIFAR-10 and ImageNet. |
| Dataset Splits | No | Early stopping is employed based on average reconstruction loss of x and z on validation sets. The paper mentions using validation sets but does not specify the split percentages, sample counts, or the exact methodology for creating these splits. |
| Hardware Specification | Yes | while our model only requires less than 2 days (4 hours per epoch) for training and 0.01 seconds/image for generating on a single TITAN X GPU. |
| Software Dependencies | No | The paper mentions optimizers like Adam and initialization methods like Xavier, but does not provide specific version numbers for any software dependencies or libraries used. |
| Experiment Setup | Yes | All parameters were initialized with Xavier [36], and optimized via Adam [37] with learning rate 0.0001. We do not perform any dataset-specific tuning or regularization other than dropout [38]. Early stopping is employed based on average reconstruction loss of x and z on validation sets. (A training-loop sketch of this setup follows the table.) |
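
The Experiment Setup row pins down enough hyperparameters to sketch the reported training configuration. The snippet below is a minimal PyTorch illustration, not the authors' implementation: the autoencoder architecture, dropout rate, data, batch size, epoch budget, and early-stopping patience are all assumptions, MSE stands in for the paper's reconstruction loss, and only the reconstruction of x (not of both x and z) is monitored.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def init_xavier(module):
    # Xavier (Glorot) initialization, as stated in the paper.
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Placeholder autoencoder; the real AS-VAE uses convolutional encoder/decoder
# pairs plus adversarial critics, which are omitted here.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Dropout(0.5),                 # dropout: the only regularization reported
    nn.Linear(256, 784),
)
model.apply(init_xavier)             # Xavier init, as in the paper
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)  # lr from the paper

x_train = torch.rand(1024, 784)      # stand-in data (MNIST-sized vectors)
x_val = torch.rand(256, 784)

best_val, patience, bad_epochs = float("inf"), 10, 0   # patience is assumed
for epoch in range(200):             # epoch budget is assumed
    model.train()
    for i in range(0, len(x_train), 64):
        batch = x_train[i:i + 64]
        loss = F.mse_loss(model(batch), batch)  # MSE as a stand-in loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Early stopping on average validation reconstruction loss.
    model.eval()
    with torch.no_grad():
        val_loss = F.mse_loss(model(x_val), x_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```

The only choices taken directly from the paper are Xavier initialization, Adam with learning rate 0.0001, dropout as the sole regularizer, and early stopping on validation reconstruction loss; everything else in the sketch is illustrative.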