Canonical normalizing flows for manifold learning
Authors: Kyriakos Flouris, Ender Konukoglu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare our method with the rectangular normalizing flow (RNF) and the original Brehmer and Cranmer manifold-learning flow (MFlow). The test FID scores of the trained models are shown in Table 1. |
| Researcher Affiliation | Academia | Kyriakos Flouris Department of Information Technology and Electrical Engineering ETH Zürich kflouris@vision.ee.ethz.ch Ender Konukoglu Department of Information Technology and Electrical Engineering ETH Zürich kender@vision.ee.ethz.ch |
| Pseudocode | No | The paper provides mathematical derivations and descriptions of the method, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | 1Code is available at https://github.com/k-flouris/cmf/. |
| Open Datasets | Yes | MFlow, RNF and CMF are compared using the image datasets MNIST, Fashion-MNIST and Omniglot (in R^784). The standard normalizing-flow benchmarks of tabular datasets from [3] are used to compare the RNF, MFlow and CMF methods. |
| Dataset Splits | No | The paper mentions 'test FID scores' and 'validation FID-like scores' but does not specify the explicit percentages, counts, or methodology for the train/validation/test dataset splits, nor does it refer to a predefined standard split with specific details. |
| Hardware Specification | Yes | The training was performed on a GPU cluster with various GPU nodes including Nvidia GTX 1080, Nvidia GTX 1080 Ti, Nvidia Tesla V100, Nvidia RTX 2080, Nvidia Titan RTX, Nvidia Quadro RTX 6000, Nvidia RTX 3090. |
| Software Dependencies | No | The paper mentions 'Optimizer Adams' and 'Real NVP' but does not provide specific version numbers for these or any other software dependencies such as Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | Learning rate 1×10⁻⁴; optimizer Adam; likelihood annealing between epochs 25 and 50; early stopping (MNIST, Fashion-MNIST, Omniglot: min 121 epochs; SVHN, CelebA, CIFAR-10: min 201 epochs); batch size 120; latent dimensions {5, 10, 20, 30, 40}; hyperparameters of Equation (8): β = 5, γ ∈ {0.1, 0.01}; Jacobian-transpose-Jacobian computed with the Hutchinson estimator (Gaussian probes, K = 1); CG tolerance 0.001; D-dim flow coupler 8×64; d-dim flow with 10 layers. |
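The setup above estimates the Jacobian-transpose-Jacobian term with a Hutchinson trace estimator using Gaussian probe vectors and K = 1. As a rough illustration of that technique (not the paper's implementation), here is a minimal pure-Python sketch on a toy 2×2 Jacobian; the function and variable names are hypothetical:

```python
import random

def hutchinson_trace(matvec, dim, k=1, seed=0):
    """Hutchinson estimator: E[v^T A v] = tr(A) for v ~ N(0, I).

    matvec: function computing A @ v for a length-`dim` vector v.
    k: number of Gaussian probe vectors (the paper's setup uses K = 1;
       larger k reduces the variance of the estimate).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(k):
        v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        av = matvec(v)
        total += sum(vi * ai for vi, ai in zip(v, av))
    return total / k

# Toy 2x2 Jacobian J; the matrix-vector product v -> J^T (J v)
# stands in for the Jacobian-transpose-Jacobian action.
J = [[1.0, 2.0], [3.0, 4.0]]

def jtj_matvec(v):
    jv = [sum(J[i][j] * v[j] for j in range(2)) for i in range(2)]
    return [sum(J[i][j] * jv[i] for i in range(2)) for j in range(2)]

approx = hutchinson_trace(jtj_matvec, dim=2, k=100_000)
exact = 30.0  # tr(J^T J) = 10 + 20
```

With K = 1, as in the reported configuration, the estimate is unbiased but noisy; averaging over many probes (here k = 100 000) brings it close to the exact trace.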