Residual Flows for Invertible Generative Modeling
Authors: Ricky T. Q. Chen, Jens Behrmann, David K. Duvenaud, Joern-Henrik Jacobsen
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The resulting approach, called Residual Flows, achieves state-of-the-art performance on density estimation amongst flow-based models, and outperforms networks that use coupling blocks at joint generative and discriminative modeling. Table 1 reports the bits per dimension (−log₂ p(x)/d, where x ∈ ℝᵈ) on standard benchmark datasets MNIST, CIFAR-10, downsampled ImageNet, and CelebA-HQ. |
| Researcher Affiliation | Academia | University of Toronto, University of Bremen, Vector Institute |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for their method or a direct link to a code repository. |
| Open Datasets | Yes | Table 1 reports the bits per dimension (−log₂ p(x)/d, where x ∈ ℝᵈ) on standard benchmark datasets MNIST, CIFAR-10, downsampled ImageNet, and CelebA-HQ. |
| Dataset Splits | No | The paper mentions using standard benchmark datasets like MNIST and CIFAR-10, but it does not explicitly provide specific details about training, validation, and test splits (e.g., percentages or sample counts) within the main text. |
| Hardware Specification | No | The paper mentions using 'GPUs' and '4 GPUs' for training, but it does not specify any particular GPU models (e.g., NVIDIA A100, Tesla V100), CPU models, or other detailed hardware specifications. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with their version numbers (e.g., Python 3.x, TensorFlow 2.x, PyTorch 1.x) that are required to replicate the experiment. |
| Experiment Setup | Yes | We also increase the bound on the Lipschitz constants of each weight matrix to 0.98, whereas Behrmann et al. (2019) used 0.90 to reduce the error of the biased estimator. A more detailed description of architectures is in Appendix E. [...] Residual Flow models are trained with the standard batch size of 64 and converge in roughly 300-350 epochs for MNIST and CIFAR-10. |
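The bits-per-dimension metric quoted twice in the table (−log₂ p(x)/d) is a simple unit conversion of the model's log-likelihood. As a minimal sketch (not taken from the paper's code, and assuming the likelihood is reported in nats, as most frameworks do):

```python
import math

def bits_per_dim(log_likelihood_nats, num_dims):
    """Convert a per-example log-likelihood in nats into
    bits per dimension: -log2 p(x) / d, using log2 p = ln p / ln 2."""
    return -log_likelihood_nats / (num_dims * math.log(2))

# Illustrative numbers only: a log-likelihood of -1000 nats
# on a 28x28 single-channel image (d = 784 dimensions).
bpd = bits_per_dim(-1000.0, 28 * 28)
```

Lower is better: a model assigning higher probability to the data yields fewer bits per dimension, which is what Table 1 of the paper compares across MNIST, CIFAR-10, ImageNet, and CelebA-HQ.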
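The "bound on the Lipschitz constants of each weight matrix to 0.98" refers to constraining each layer's spectral norm so the residual block remains invertible. A minimal NumPy sketch of one common way to enforce such a bound, via power-iteration spectral normalization (the function name and iteration count are illustrative, not the paper's implementation):

```python
import numpy as np

def spectrally_normalize(W, coeff=0.98, n_iters=5):
    """Rescale matrix W so its largest singular value is at most `coeff`,
    estimating that singular value with power iteration."""
    u = np.random.randn(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v          # estimate of the spectral norm of W
    scale = min(1.0, coeff / sigma)  # only shrink if the bound is exceeded
    return W * scale
```

Keeping the coefficient strictly below 1 (0.98 here, versus 0.90 in Behrmann et al., 2019) guarantees the residual map is a contraction, which is the invertibility condition for residual flows.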