Invertible Monotone Operators for Normalizing Flows
Authors: Byeongkeun Ahn, Chiyoon Kim, Youngjoon Hong, Hyunwoo J. Kim
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Monotone Flows consistently outperform comparable baseline normalizing flows on multiple image density estimation benchmarks as well as on 2D toy datasets. In addition, ablation studies demonstrate the effectiveness of the proposed methods. |
| Researcher Affiliation | Academia | Byeongkeun Ahn (Korea University), Chiyoon Kim (Korea University), Youngjoon Hong (Sungkyunkwan University), Hyunwoo J. Kim (Korea University); emails: {byeongkeunahn, kimchiyoon, hyunwoojkim}@korea.ac.kr, hongyj@skku.edu |
| Pseudocode | No | The paper describes its methods but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/mlvlab/MonotoneFlows. |
| Open Datasets | Yes | We evaluate our method on MNIST [43], CIFAR-10 [44], and the downscaled version of ImageNet in 32×32 and 64×64 [45]. |
| Dataset Splits | Yes | We evaluate our method on MNIST [43], CIFAR-10 [44], and the downscaled version of ImageNet in 32×32 and 64×64 [45]. |
| Hardware Specification | Yes | All models were trained on 8 NVIDIA GeForce RTX 3090s with 24GB VRAM or 4 NVIDIA A100s with 80GB VRAM. |
| Software Dependencies | No | The paper mentions using Adam optimizer and provides a GitHub link to its code, but it does not specify other software dependencies like libraries or frameworks with version numbers (e.g., PyTorch, TensorFlow, specific Python libraries). |
| Experiment Setup | Yes | We use Adam optimizer with learning rate 1e-3, and we train the models for 50k iterations. The batch size is 500. We train for 100, 1,000, 20, 20 epochs with batch sizes 64, 64, 256, 256 with learning rates 0.001, 0.001, 0.004, 0.004 for MNIST, CIFAR-10, ImageNet32, ImageNet64, respectively. |
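
The experiment-setup row lists the reported optimizer and per-dataset hyperparameters. Below is a minimal sketch of how those settings could be wired up, assuming a PyTorch environment; the `IMAGE_CONFIGS` dict, `make_optimizer` helper, and the dummy model are illustrative placeholders, not the authors' Monotone Flow implementation.

```python
# Hedged sketch: per-dataset training hyperparameters as reported in the table above.
# The model and helper names here are assumptions for illustration only.
import torch
import torch.nn as nn

# Reported image-benchmark settings (epochs, batch size, learning rate).
IMAGE_CONFIGS = {
    "MNIST":      {"epochs": 100,  "batch_size": 64,  "lr": 1e-3},
    "CIFAR-10":   {"epochs": 1000, "batch_size": 64,  "lr": 1e-3},
    "ImageNet32": {"epochs": 20,   "batch_size": 256, "lr": 4e-3},
    "ImageNet64": {"epochs": 20,   "batch_size": 256, "lr": 4e-3},
}

# Reported setting quoted first in the row: 50k iterations, batch size 500, lr 1e-3.
ITERATION_CONFIG = {"iterations": 50_000, "batch_size": 500, "lr": 1e-3}


def make_optimizer(model: nn.Module, dataset: str) -> torch.optim.Adam:
    """Build an Adam optimizer with the reported learning rate for `dataset`."""
    cfg = IMAGE_CONFIGS[dataset]
    return torch.optim.Adam(model.parameters(), lr=cfg["lr"])


if __name__ == "__main__":
    dummy_model = nn.Linear(784, 784)  # placeholder; not the Monotone Flow architecture
    optimizer = make_optimizer(dummy_model, "CIFAR-10")
    print(optimizer)
```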