Denoising Normalizing Flow

Authors: Christian Horvat, Jean-Pascal Pfister

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate on naturalistic data that our method learns meaningful latent representations without sacrificing the sample quality."
Researcher Affiliation | Academia | Christian Horvat, Department of Physiology, University of Bern, Bern, Switzerland (christian.horvat@unibe.ch); Jean-Pascal Pfister, Department of Physiology, University of Bern, Bern, Switzerland (jeanpascal.pfister@unibe.ch)
Pseudocode | Yes | "DNF Algorithm: Training of Denoising Normalizing Flow for q_σ(x̃|x) = N(x̃; x, σ²I_D)."
Open Source Code | Yes | "Our main code is available at https://github.com/chrvt/denoising-normalizing-flow and is based on the original M-flow implementation made public by the authors of [10] under the MIT license."
Open Datasets | Yes | "Therefore, [10] uses a StyleGAN2 model [23] trained on the FFHQ dataset [22] to generate a d-dimensional manifold by varying only the first d latent variables while keeping the remaining ones fixed."
Dataset Splits | No | The paper mentions training image counts, epochs, and batch sizes, but does not provide specific train/validation/test split percentages or counts in the main text.
Hardware Specification | No | The paper mentions running experiments on a "GPU" but does not specify the exact model, manufacturer, or other detailed hardware specifications.
Software Dependencies | No | The paper mentions using implementations based on other authors' work and their licenses (MIT, Apache License 2.0, GPLv3.0), but does not specify software versions (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup | Yes | "For the DNF, we use Gaussian noise with σ² = 0.01 and λ = 1. ... For that, we first train a DNF on 10⁴ images using 100 epochs with σ² = 0.1 and λ = 1000. ... We train the models on 2·10⁴ images for 200 epochs."
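The corruption model quoted in the pseudocode row, q_σ(x̃|x) = N(x̃; x, σ²I_D), amounts to adding isotropic Gaussian noise to each training point before it is passed to the flow. Below is a minimal sketch of that sampling step only, not the authors' implementation; the function name `corrupt` and the NumPy-based batch layout are our assumptions, while σ² = 0.01 matches the setting quoted in the experiment-setup row.

```python
import numpy as np

def corrupt(x, sigma2=0.01, rng=None):
    """Sample x_tilde ~ q_sigma(x_tilde | x) = N(x_tilde; x, sigma^2 * I_D).

    Hypothetical helper illustrating the Gaussian corruption used when
    training a denoising normalizing flow; sigma2 = 0.01 is the value
    quoted from the paper's experiment setup.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Isotropic noise: same standard deviation sqrt(sigma2) in every dimension.
    return x + np.sqrt(sigma2) * rng.standard_normal(x.shape)

# Example: corrupt a batch of 8 points in D = 3 dimensions.
x = np.zeros((8, 3))
x_tilde = corrupt(x, sigma2=0.01, rng=np.random.default_rng(0))
```

The corrupted batch `x_tilde` would then replace `x` as the flow's input, with the clean `x` kept as the denoising target.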