Scalable Normalizing Flows for Permutation Invariant Densities

Authors: Marin Biloš, Stephan Günnemann

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In the experiments we aim to show that having an exact-trace model does not limit expressiveness; on the contrary, we get the benefits of faster and more stable training. First, we show that our versions of the models are equivalent to their original implementations, then demonstrate the modeling capacity for point processes, and finally show how we can scale to bigger datasets.
Researcher Affiliation | Academia | Marin Biloš, Stephan Günnemann (Technical University of Munich, Germany).
Pseudocode | No | No pseudocode or algorithm blocks found in the paper.
Open Source Code | No | The detailed explanation of the data processing, hyperparameter tuning, and additional results can be found in the Supplementary Material (footnote: https://www.daml.in.tum.de/scalable-nf).
Open Datasets | Yes | Check-ins NY (Cho et al., 2011) is a collection of locations of social network users. [...] The Crimes dataset contains daily records of locations and types of crimes that occurred in Portland (https://nij.ojp.gov/funding/real-time-crime-forecasting-challenge).
Dataset Splits | Yes | Datasets are split into training, validation and test sets (60%-20%-20%).
Hardware Specification | Yes | We train all of our models on a single GPU (12GB).
Software Dependencies | No | No specific software dependencies with version numbers are explicitly listed in the paper.
Experiment Setup | Yes | We train with early stopping, use mini-batches of size 64, and use the Adam optimizer with a learning rate of 10^-3 (Kingma & Ba, 2015).
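The 60%-20%-20% train/validation/test split reported above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, the fixed seed, and the shuffle-then-slice strategy are assumptions, since the paper does not describe how the split was implemented.

```python
import random


def split_dataset(items, seed=0):
    """Shuffle items and split 60% / 20% / 20% into train, val, test.

    A sketch of the split reported in the paper; the shuffling,
    seeding, and rounding choices here are assumptions.
    """
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]  # remainder, so no item is dropped
    return train, val, test
```

Slicing the remainder into the test set guarantees every item lands in exactly one split even when the dataset size is not divisible by five.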
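The experiment setup mentions early stopping, but the paper does not specify the patience or the stopping criterion. A minimal validation-loss tracker along these lines could drive the training loop; the class name, default patience, and "lower is better" convention are all assumptions for illustration.

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for
    `patience` consecutive epochs. A sketch; the paper does not
    give the patience value or the exact criterion used.
    """

    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("inf")   # best validation loss seen so far
        self.bad_epochs = 0        # epochs since the last improvement

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a typical loop this would wrap the reported configuration, e.g. mini-batches of 64 and Adam with learning rate 10^-3, calling `step(val_loss)` once per epoch and breaking out of training when it returns True.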