The Convolution Exponential and Generalized Sylvester Flows
Authors: Emiel Hoogeboom, Victor Garcia Satorras, Jakub Tomczak, Max Welling
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows. In addition, we show that Convolutional Sylvester Flows improve performance over residual flows as a generative flow model measured in log-likelihood. |
| Researcher Affiliation | Academia | Emiel Hoogeboom, UvA-Bosch Delta Lab, University of Amsterdam, The Netherlands, e.hoogeboom@uva.nl; Victor Garcia Satorras, UvA-Bosch Delta Lab, University of Amsterdam, The Netherlands, v.garciasatorras@uva.nl; Jakub M. Tomczak, Vrije Universiteit Amsterdam, The Netherlands, j.m.tomczak@vu.nl; Max Welling, UvA-Bosch Delta Lab, University of Amsterdam, The Netherlands, m.welling@uva.nl |
| Pseudocode | Yes | Algorithm 1 Implicit matrix exponential (a hedged sketch of this algorithm is given after the table). |
| Open Source Code | Yes | Code for our method can be found at: https://github.com/ehoogeboom/convolution_exponential_and_sylvester |
| Open Datasets | Yes | The performance is compared in terms of negative ELBO and negative log-likelihood (NLL) which is approximated with 1000 importance weighting samples. Values are reported in bits per dimension on CIFAR10 (see the bits-per-dimension sketch after the table). |
| Dataset Splits | No | The paper mentions 'training' and 'test' for evaluation but does not specify explicit dataset splits like percentages, sample counts, or citations to predefined splits for reproduction. |
| Hardware Specification | Yes | The timing experiments are run using four NVIDIA GTX 1080Ti GPUs for training and a single GPU for sampling. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiment. |
| Experiment Setup | Yes | In experiments we normalize the convolutional layer using an ℓ2 coefficient of 0.9 and we find that expanding around 6 terms of the series is generally sufficient (a normalization sketch follows the table). |
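
The paper's Algorithm 1 ("Implicit matrix exponential") evaluates exp(M)x through the truncated power series without ever materializing the matrix M: each term reuses the previous one via a single matrix-vector product, which for the convolution exponential is a convolution. The following PyTorch sketch illustrates the idea under stated assumptions; the function name `conv_exp`, the 'same' zero padding, and the square, odd-sized kernel are illustrative choices, not the repository's exact implementation.

```python
import torch
import torch.nn.functional as F

def conv_exp(x, kernel, terms=6):
    """Sketch of Algorithm 1 (implicit matrix exponential).

    Computes exp(M) x = x + M x + M^2 x / 2! + ... without building M:
    each application of M is a 'same'-padded convolution with `kernel`
    (shape [C, C, k, k], k odd, so the induced operator is square).
    The paper reports that roughly 6 series terms suffice once the
    layer is normalized (see the normalization sketch below).
    """
    pad = kernel.shape[-1] // 2
    out, term = x, x
    for i in range(1, terms + 1):
        # term_i = M term_{i-1} / i, so after i steps term = M^i x / i!
        term = F.conv2d(term, kernel, padding=pad) / i
        out = out + term
    return out
```

A convenient property noted in the paper is that det(exp(M)) = exp(Tr M), so the log-determinant the flow needs is just the trace of the convolution matrix and comes essentially for free.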
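The 'Open Datasets' row mentions that the NLL is approximated with 1000 importance-weighted samples and reported in bits per dimension. As a reading aid, here is a minimal sketch of that estimator; the helper name `nll_bits_per_dim` and the tensor layout are assumptions, not the paper's code.

```python
import math
import torch

def nll_bits_per_dim(log_p_xz, log_q_zx, num_dims):
    """Importance-weighted NLL estimate in bits per dimension.

    log_p_xz, log_q_zx: (K, batch) tensors holding log p(x, z_k) and
    log q(z_k | x) for K importance samples (the paper uses K = 1000).
    num_dims: data dimensionality, e.g. 3 * 32 * 32 for CIFAR10.
    """
    k = log_p_xz.shape[0]
    # log p(x) ~= logsumexp_k [log p(x, z_k) - log q(z_k | x)] - log K
    log_px = torch.logsumexp(log_p_xz - log_q_zx, dim=0) - math.log(k)
    # Convert the NLL from nats to bits and average over dimensions.
    return -log_px / (num_dims * math.log(2.0))
```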
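The 'Experiment Setup' row says the convolutional layer is normalized with an ℓ2 coefficient of 0.9 so that the power series converges quickly. A standard way to bound the spectral norm of a convolution is power iteration on the conv / transposed-conv pair; the sketch below assumes this approach, and the helper name `normalize_conv_kernel` is hypothetical.

```python
import torch
import torch.nn.functional as F

def normalize_conv_kernel(kernel, x_shape, coeff=0.9, iters=20):
    """Rescale `kernel` so the induced convolution has spectral norm
    at most `coeff` (0.9 in the paper), estimated by power iteration.

    x_shape: shape of the inputs the layer acts on, e.g. (1, C, H, W).
    """
    pad = kernel.shape[-1] // 2
    u = torch.randn(x_shape)
    for _ in range(iters):
        # One step of power iteration on M^T M.
        v = F.conv2d(u, kernel, padding=pad)
        u = F.conv_transpose2d(v, kernel, padding=pad)
        u = u / u.norm()
    sigma = F.conv2d(u, kernel, padding=pad).norm()  # largest singular value
    return kernel * (coeff / torch.clamp(sigma, min=coeff))
```

Keeping the norm below one makes the i-th series term shrink at least like 0.9^i / i!, which is why expanding around six terms is sufficient.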