Multimeasurement Generative Models

Authors: Saeed Saremi, Rupesh Kumar Srivastava

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We study permutation invariant Gaussian M-densities on MNIST, CIFAR-10, and FFHQ-256 datasets, and demonstrate the effectiveness of this framework for realizing fast-mixing stable Markov chains in high dimensions. We present our experiments on the MNIST, CIFAR-10, and FFHQ-256 datasets, which were focused on permutation invariant M-densities.
Researcher Affiliation | Collaboration | Saeed Saremi (1, 2) & Rupesh Kumar Srivastava (1); affiliations: 1 NNAISENSE Inc., 2 Redwood Center, UC Berkeley
Pseudocode | Yes | Algorithm 1: Walk-jump sampling (Saremi & Hyvärinen, 2019) using the discretization of Langevin diffusion by Sachs et al. (2017). Algorithm 2: Walk-Jump Sampling (WJS) using the discretization of Langevin diffusion from Lemma 1.
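The walk-jump sampling procedure named in the row above can be illustrated in a few lines. The following is a minimal sketch, not the paper's Algorithm 1 or 2: it uses a plain Euler–Maruyama step for overdamped Langevin dynamics (whereas the paper uses the discretization of Langevin diffusion by Sachs et al., 2017), and it substitutes a closed-form Gaussian score for the learned score model. The "walk" samples the noisy (smoothed) density; the "jump" is the empirical Bayes (Tweedie) estimate x̂ = y + σ²∇log p(y).

```python
import numpy as np

def walk_jump_sampling(score, sigma, n_steps, step_size, y0, rng):
    """Sketch of walk-jump sampling.

    'Walk': Langevin MCMC in noisy y-space, targeting the smoothed
    density p(y) whose score function is `score`.
    'Jump': denoise with the empirical Bayes (Tweedie) estimator
    x_hat = E[x | y] = y + sigma**2 * score(y).
    """
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        # One Euler-Maruyama step of overdamped Langevin diffusion
        # (a simplification; not the Sachs et al. (2017) scheme).
        noise = rng.standard_normal(y.shape)
        y = y + step_size * score(y) + np.sqrt(2.0 * step_size) * noise
    # Jump step: single-shot denoising of the final walker position.
    return y + sigma**2 * score(y)

# Toy check with a known answer: clean data x ~ N(0, 1) and noise level
# sigma = 1 give y ~ N(0, 2), so score(y) = -y / 2 and the jump yields
# x_hat = y / 2, i.e. samples with mean 0 and variance ~= 0.5.
sigma = 1.0
score = lambda y: -y / (1.0 + sigma**2)
rng = np.random.default_rng(0)
samples = walk_jump_sampling(score, sigma, n_steps=2000, step_size=0.05,
                             y0=rng.standard_normal(10000), rng=rng)
```

In the real algorithm the analytic `score` above is replaced by the gradient of the learned model's log-density, and the walk runs long enough for the chain to mix in y-space, where mixing is fast because the smoothed density is better conditioned than the data density.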
Open Source Code | Yes | Our code is publicly available at https://github.com/nnaisense/mems.
Open Datasets | Yes | Our experiments were conducted on the MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky, 2009), and FFHQ-256 (Karras et al., 2019; pre-processed version by Child, 2020) datasets.
Dataset Splits | No | The paper mentions using standard datasets (MNIST, CIFAR-10, FFHQ-256) and refers to 'training sets', but does not provide specific details on the train/validation/test splits (e.g., percentages, sample counts, or explicit references to predefined splits with full citations) needed for reproduction.
Hardware Specification | Yes | Table 2: Main hyperparameters and computational resources used for training MDAE models. ... GPUs: 1× GTX Titan X / 4× GTX Titan X / 4× V100
Software Dependencies | Yes | All models were implemented in CPython (v3.8) using PyTorch v1.9.0 (Paszke et al., 2017) and the PyTorch Lightning framework v1.4.6 (Falcon et al.).
Experiment Setup | Yes | All important details of the experimental setup are provided in Appendix E. Table 2 lists the main hyperparameters used and the hardware requirements for training MDAE models for each dataset.