Whitening Convergence Rate of Coupling-based Normalizing Flows

Authors: Felix Draxler, Christoph Schnörr, Ullrich Köthe

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments demonstrate the implications of our theory and point at open questions. |
| Researcher Affiliation | Academia | Felix Draxler (Heidelberg University, felix.draxler@iwr.uni-heidelberg.de); Christoph Schnörr (Heidelberg University, schnoerr@math.uni-heidelberg.de); Ullrich Köthe (Heidelberg University, ullrich.koethe@iwr.uni-heidelberg.de) |
| Pseudocode | No | The paper contains mathematical equations and descriptions of processes, but no explicitly labeled "Pseudocode" or "Algorithm" blocks. |
| Open Source Code | Yes | The code and the generated data and models can be found at: https://github.com/fdraxler/whiten-nf |
| Open Datasets | Yes | "In an experiment, we fit a set of Glow [6] coupling flows of increasing depths to the EMNIST digit dataset [38] using maximum likelihood loss and measure the capability of each flow in decreasing G and S (Details in Appendix A.1)." |
| Dataset Splits | No | The paper mentions training and testing on splits of the dataset but does not explicitly specify a validation split or its size/usage. |
| Hardware Specification | Yes | All experiments were run on a single NVIDIA RTX 3090 GPU. |
| Software Dependencies | Yes | The code is written in Python 3.9 using PyTorch 1.10. |
| Experiment Setup | Yes | "The model is trained for 200 epochs using the Adam optimizer [45] with a learning rate of 0.0001, a batch size of 512, and an L2 penalty of 10^-5. We use a single affine coupling layer per block followed by a fixed permutation in the rotation layer." |
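The block structure quoted in the experiment setup (a single affine coupling layer followed by a fixed permutation) can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's actual Glow implementation: the conditioner `st` is a hypothetical stand-in (a fixed linear map with a bounded log-scale), and the dimension and permutation are arbitrary toy choices. It shows the key property that makes such flows usable for maximum-likelihood training: the block is exactly invertible.

```python
import numpy as np

# Toy sketch of one coupling-flow block: affine coupling + fixed permutation.
# `st` is a placeholder conditioner, NOT the paper's subnet architecture.
rng = np.random.default_rng(0)
D = 4
W = rng.standard_normal((D // 2, D))  # toy conditioner parameters

def st(x_half):
    """Map one half of the input to a log-scale s and a shift t."""
    s = np.tanh(W[:, : D // 2] @ x_half)  # tanh keeps the scale bounded
    t = W[:, D // 2 :] @ x_half
    return s, t

perm = np.array([2, 0, 3, 1])  # fixed permutation ("rotation layer" stand-in)

def forward(x):
    x1, x2 = x[: D // 2], x[D // 2 :]
    s, t = st(x1)                       # s, t depend only on x1
    y = np.concatenate([x1, x2 * np.exp(s) + t])
    return y[perm]                      # apply the fixed permutation

def inverse(y):
    x = np.empty(D)
    x[perm] = y                         # undo the permutation
    x1, z2 = x[: D // 2], x[D // 2 :]
    s, t = st(x1)                       # recomputable: x1 passed through untouched
    return np.concatenate([x1, (z2 - t) * np.exp(-s)])

x = rng.standard_normal(D)
assert np.allclose(inverse(forward(x)), x)  # block is exactly invertible
```

Because `x1` passes through the coupling unchanged, `s` and `t` can be recomputed in the inverse pass, which is what makes the layer cheaply invertible regardless of how complex the conditioner is.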