Multi-objects Generation with Amortized Structural Regularization
Authors: Kun Xu, Chongxuan Li, Jun Zhu, Bo Zhang
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that ASR outperforms the DGM baselines in terms of inference performance and sample quality. |
| Researcher Affiliation | Academia | Kun Xu, Chongxuan Li, Jun Zhu, Bo Zhang Dept. of Comp. Sci. & Tech., Institute for AI, THBI Lab, BNRist Center, State Key Lab for Intell. Tech. & Sys., Tsinghua University, Beijing, China {kunxu.thu, chongxuanli1991}@gmail.com, {dcszj, dcszb}@tsinghua.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | Our code is attached in the supplementary materials for reproduction. |
| Open Datasets | Yes | In this section, we present the empirical results of ASR on two datasets: Multi-MNIST [8] and Multi-Sprites [12]. |
| Dataset Splits | No | The paper specifies training and test data sizes (e.g., '40000 training samples' and '2000 images are used as the test data'), but it does not explicitly state a validation dataset split. |
| Hardware Specification | No | The paper mentions implementing the model using TensorFlow but provides no specific details about the hardware used for running experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | We implement our model using the TensorFlow [1] library. (The paper names TensorFlow but does not specify a version number for it or any other software dependency.) |
| Experiment Setup | Yes | We use the Adam optimizer [18] with a learning rate of 0.001, β1 = 0.9, and β2 = 0.999. We train models for 300 epochs with a batch size of 64. |
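
For reference, the reported hyperparameters map directly onto TensorFlow, the library the paper says it uses. The snippet below is a minimal sketch of that optimizer configuration only, not the authors' code; the model definition and data pipeline are not described in enough detail to reconstruct, so they are omitted.

```python
import tensorflow as tf

# Sketch of the training configuration reported in the paper:
# Adam with learning rate 0.001, beta1 = 0.9, beta2 = 0.999,
# trained for 300 epochs with batch size 64.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-3,
    beta_1=0.9,
    beta_2=0.999,
)

EPOCHS = 300      # number of training epochs reported
BATCH_SIZE = 64   # batch size reported
```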