Woodbury Transformations for Deep Generative Flows

Authors: You Lu, Bert Huang

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we compare the performance of Woodbury transformations against other modern flow architectures, measuring running time, bits per dimension (log-likelihood), and sample quality. We train with the CIFAR-10 [23] and ImageNet [31] datasets.
Researcher Affiliation | Academia | You Lu, Department of Computer Science, Virginia Tech, Blacksburg, VA, you.lu@vt.edu; Bert Huang, Department of Computer Science, Tufts University, Medford, MA, bert@cs.tufts.edu
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide a statement about releasing source code or a direct link to a code repository for the described methodology.
Open Datasets | Yes | We train with the CIFAR-10 [23] and ImageNet [31] datasets. We train Glow and Woodbury-Glow on the CelebA-HQ dataset [19].
Dataset Splits | No | The paper mentions training and evaluating on specific datasets (CIFAR-10, ImageNet, CelebA-HQ) and reports test-set likelihoods, but it does not explicitly specify the training, validation, and test splits (e.g., percentages, sample counts, or a citation to a predefined split methodology).
Hardware Specification | Yes | For fair comparison, we implement all methods in PyTorch and run them on an Nvidia Titan V GPU.
Software Dependencies | No | The paper mentions PyTorch but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | For 32×32 images, we set the number of levels to L = 3 and the number of steps per level to K = 8. For 64×64 images, we use L = 4 and K = 16. More details are in the appendix. For Woodbury transformations, we fix the latent dimension d = 16. We use 5-bit images and set the size of images to be 64×64, 128×128, and 256×256. Detailed parameter settings are in the appendix.
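
The Research Type row quotes the paper's evaluation metric, bits per dimension. As context only, the snippet below is a generic conversion from a total negative log-likelihood in nats to bits per dimension; the function name, the example likelihood value, and the image size are illustrative and do not come from the paper.

```python
import math

def bits_per_dim(nll_nats: float, num_dims: int) -> float:
    """Convert a total negative log-likelihood (in nats) to bits per dimension."""
    return nll_nats / (num_dims * math.log(2))

# Illustrative example: a 32x32x3 CIFAR-10 image has 3072 dimensions.
print(bits_per_dim(nll_nats=7000.0, num_dims=32 * 32 * 3))  # ~3.29 bits/dim
```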
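To consolidate the values quoted in the Experiment Setup row, the following is a minimal configuration sketch. The dictionary names and structure are assumptions for illustration, not the authors' actual configuration files; the numeric values are those stated in the paper, with remaining details deferred to its appendix.

```python
# Flow architecture depth per image resolution (keys/structure are illustrative).
GLOW_BACKBONE = {
    "32x32": {"levels_L": 3, "steps_per_level_K": 8},
    "64x64": {"levels_L": 4, "steps_per_level_K": 16},
}

# Woodbury transformations fix the latent dimension d = 16.
WOODBURY = {"latent_dim_d": 16}

# CelebA-HQ experiments use 5-bit images at three resolutions.
CELEBA_HQ = {
    "bit_depth": 5,
    "image_sizes": [64, 128, 256],  # 64x64, 128x128, 256x256
}
```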