Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Property Controllable Variational Autoencoder via Invertible Mutual Dependence
Authors: Xiaojie Guo, Yuanqi Du, Liang Zhao
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Quantitative and qualitative evaluations confirm that the PCVAE outperforms the existing models by up to 28% in capturing and 65% in manipulating the desired properties. |
| Researcher Affiliation | Academia | Xiaojie Guo, Department of IST, George Mason University, Fairfax, VA 22030, USA, EMAIL; Yuanqi Du, Department of CS, George Mason University, Fairfax, VA 22030, USA, EMAIL; Department of CS, Emory University, Atlanta, GA 30322, USA, EMAIL |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for the proposed PCVAE is available at: https://github.com/xguo7/PCVAE. |
| Open Datasets | Yes | The dSprites dataset (Matthey et al., 2017) consists of 2D shapes procedurally generated from ground-truth independent semantic factors. The 3Dshapes dataset (Burgess & Kim, 2018) consists of 3D shapes procedurally generated from ground-truth independent semantic factors. The QM9 dataset (Ramakrishnan et al., 2014) consists of 134k stable small organic molecules. |
| Dataset Splits | No | For the dSprites, 3Dshapes, and QM9 datasets, the paper specifies a "training/testing set split" but does not explicitly mention a separate validation set split or its size/methodology. |
| Hardware Specification | Yes | All experiments were conducted on a 64-bit machine with an NVIDIA GPU (GTX 1080 Ti, 11016 MHz, 11 GB GDDR5). |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for dependencies, only general mentions of MLPs, CNNs, GNNs, and spectral normalization. |
| Experiment Setup | Yes | The architectures and hyper-parameters can be found in Appendix B. (In Appendix B, Table 6 specifies 'Learning rate', 'Batch size', 'α', 'ρ', 'Num iteration', 'c (spectral norm)' for the dSprites and QM9 datasets.) |
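For context on the spectral normalization mentioned under Software Dependencies and the 'c (spectral norm)' hyper-parameter above: spectral normalization rescales a weight matrix by an estimate of its largest singular value, typically obtained via power iteration. The NumPy sketch below is illustrative background only, not the authors' implementation; the function name and iteration count are assumptions.

```python
import numpy as np

def spectral_normalize(W, n_iters=50, seed=0):
    """Return W divided by an estimate of its largest singular value,
    computed by power iteration (illustrative sketch)."""
    u = np.random.default_rng(seed).normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated spectral norm of W
    return W / sigma
```

After normalization, the matrix has spectral norm approximately 1, which bounds the Lipschitz constant of the corresponding linear layer.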