Sliced Iterative Normalizing Flows
Authors: Biwei Dai, Uros Seljak
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experiments): 5.1 Density Estimation p(x) of Tabular Datasets; 5.2 Generative Modeling of Images |
| Researcher Affiliation | Academia | Department of Physics, University of California, Berkeley, California, USA; Lawrence Berkeley National Laboratory, Berkeley, California, USA |
| Pseudocode | Yes | Algorithm 1 (max K-SWD) and Algorithm 2 (Sliced Iterative Normalizing Flow) |
| Open Source Code | No | The paper does not contain an explicit statement about the release of its source code or a link to a code repository. |
| Open Datasets | Yes | We perform density estimation with GIS on four UCI datasets (Lichman et al., 2013) and BSDS300 (Martin et al., 2001), as well as image datasets MNIST (Le Cun et al., 1998) and Fashion-MNIST (Xiao et al., 2017). |
| Dataset Splits | Yes | "The data preprocessing of UCI datasets and BSDS300 follows Papamakarios et al. (2017)." Validation data is used for model selection: "All the models are trained until the validation log pval stops improving" |
| Hardware Specification | Yes | All the models are tested on both a cpu and a K80 gpu, and the faster results are reported here (the results with * are run on gpus.) |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. It mentions 'a cpu and a K80 gpu' and 'python', but no software versions. |
| Experiment Setup | Yes | For GIS we consider two hyperparameter settings: large regularization α (Equation 13) for better log p performance, and small regularization α for faster training. For other NFs we use settings recommended by their original paper, and set the batch size to min(N/10, Nbatch), where Nbatch is the batch size suggested by the original paper. All the models are trained until the validation log pval stops improving, and for KDE the kernel width is chosen to maximize log pval. |
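The pseudocode row above refers to the paper's max K-SWD objective, a variant of the sliced Wasserstein distance. As background for the reproducibility assessment, the following is a minimal illustrative sketch of a plain sliced Wasserstein distance estimate in NumPy; note it averages over random projection directions, whereas the paper's Algorithm 1 instead *maximizes* the distance over K orthogonal directions, so this is not the authors' algorithm, only the underlying quantity. The function names and parameters here are hypothetical.

```python
import numpy as np

def wasserstein2_1d(x_proj, y_proj):
    # 1D Wasserstein-2 distance between equal-size samples:
    # sort both projected samples and compare order statistics.
    return np.sqrt(np.mean((np.sort(x_proj) - np.sort(y_proj)) ** 2))

def random_sliced_wd(x, y, n_slices=50, seed=0):
    # Monte Carlo sliced Wasserstein distance: average the 1D
    # distance over random unit directions on the sphere.
    # (The paper's max K-SWD would optimize orthogonal directions
    # to maximize this instead of averaging random ones.)
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    dirs = rng.normal(size=(n_slices, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    dists = [wasserstein2_1d(x @ w, y @ w) for w in dirs]
    return float(np.mean(dists))
```

For example, the distance between a sample and itself is exactly zero, while a constant shift of the data yields a strictly positive distance.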