Implicit Normalizing Flows

Authors: Cheng Lu, Jianfei Chen, Chongxuan Li, Qiuhao Wang, Jun Zhu

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we evaluate ImpFlow on several classification and density modeling tasks, and ImpFlow outperforms ResFlow with a comparable amount of parameters on all the benchmarks.
Researcher Affiliation | Collaboration | Dept. of Comp. Sci. & Tech., Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University, Beijing, 100084, China; Center for Data Science, Peking University, Beijing, 100871, China
Pseudocode | Yes | Algorithm 1: Forward Algorithm For a Single-Block ImpFlow; Algorithm 2: Backward Algorithm For a Single-Block ImpFlow
Open Source Code | Yes | See https://github.com/thu-ml/implicit-normalizing-flows for details.
Open Datasets | Yes | We test the effectiveness of ImpFlow on several classification and generative modeling tasks. ImpFlow outperforms ResFlow on all the benchmarks, with comparable model sizes and computational cost... CIFAR10 and CIFAR100 (Krizhevsky & Hinton, 2009)... tabular datasets: POWER (d = 6), GAS (d = 8), HEPMASS (d = 21), MINIBOONE (d = 43) and BSDS300 (d = 63) from the UCI repository (Dua & Graff, 2017)... 5-bit 64×64 CelebA (Kingma & Dhariwal, 2018).
Dataset Splits | Yes | We use the same data preprocessing as Papamakarios et al. (2017), including the train/valid/test dataset splits.
Hardware Specification | Yes | on a single Tesla P100 (SXM2-16GB)... on a single NVIDIA GeForce GTX 1080Ti... train each experiment on a single NVIDIA GeForce GTX 2080Ti
Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number.
Experiment Setup | Yes | For Broyden's method, we use ε_f = 10^-6 and ε_b = 10^-10 for training and testing... batch size of 128, Adam optimizer with learning rate 10^-3 and no weight decay, and total epoch of 150.
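The forward pass of a single ImpFlow block (Algorithm 1) requires solving an implicit equation by root finding, for which the paper uses Broyden's method with a tolerance such as ε_f = 10^-6. The sketch below is illustrative only, not the authors' implementation: it assumes a block of the composed-residual form z + g2(z) = x + g1(x) with Lipschitz-bounded maps, and the toy functions `g1`, `g2` and the solver `broyden_solve` are hypothetical names chosen here.

```python
import numpy as np

def broyden_solve(F, z0, eps=1e-6, max_iter=50):
    """Find z with ||F(z)|| < eps via Broyden's ("good") method.

    Maintains an approximate inverse Jacobian B, updated with a
    rank-1 correction each step, so no explicit Jacobians are needed.
    """
    z = z0.astype(float)
    Fz = F(z)
    B = np.eye(z.size)  # inverse-Jacobian approximation
    for _ in range(max_iter):
        if np.linalg.norm(Fz) < eps:
            break
        dz = -B @ Fz
        z_new = z + dz
        Fz_new = F(z_new)
        dF = Fz_new - Fz
        # good-Broyden rank-1 update of the inverse Jacobian
        denom = dz @ (B @ dF)
        if abs(denom) > 1e-12:
            B = B + np.outer(dz - B @ dF, dz @ B) / denom
        z, Fz = z_new, Fz_new
    return z

# Toy single-block implicit equation: z + g2(z) = x + g1(x),
# with contractive (Lipschitz < 1) maps as illustrative stand-ins.
g1 = lambda v: 0.5 * np.tanh(v)
g2 = lambda v: 0.3 * np.sin(v)

x = np.array([0.7, -1.2])
rhs = x + g1(x)
z = broyden_solve(lambda z: z + g2(z) - rhs, z0=x.copy(), eps=1e-6)
assert np.allclose(z + g2(z), x + g1(x), atol=1e-5)
```

Because g2 is contractive, the residual map is well conditioned and Broyden's quasi-Newton iteration converges quickly from the initialization z0 = x; the backward pass (Algorithm 2) solves an analogous linear-implicit system for gradients.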