Kernelised Normalising Flows

Authors: Eshant English, Matthias Kirchler, Christoph Lippert

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We assess the performance of our Ferumal flow kernelisation both on synthetic 2D toy datasets and on five real-world benchmark datasets sourced from Dua & Graff (2017). The benchmark datasets include Power, Gas, Hepmass, MiniBooNE, and BSDS300."
Researcher Affiliation | Academia | "Eshant English (1), Matthias Kirchler (1, 2), Christoph Lippert (1, 3); {first}.{last}@hpi.de. (1) Hasso Plattner Institute for Digital Engineering, Germany; (2) University of Kaiserslautern-Landau, Germany; (3) Hasso Plattner Institute for Digital Health at the Icahn School of Medicine at Mount Sinai, NYC, USA"
Pseudocode | No | The paper provides mathematical formulations for its method but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an unambiguous statement or link for the public release of its source code.
Open Datasets | Yes | "We assess the performance of our Ferumal flow kernelisation both on synthetic 2D toy datasets and on five real-world benchmark datasets sourced from Dua & Graff (2017). The benchmark datasets include Power, Gas, Hepmass, MiniBooNE, and BSDS300." and "Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml."
Dataset Splits | Yes | "Glow and Real NVP struggled to generalise in low-data regimes, evidenced by increasing validation and test losses whilst the training losses decreased." (A minimal split sketch appears after the table.)
Hardware Specification | Yes | "We ran all experiments for Ferumal flows and other baselines on CPUs (Intel Xeon 3.7 GHz)."
Software Dependencies | No | "We coded our method in PyTorch (Paszke et al., 2019) and used existing implementations for the other algorithms. We learnt all the kernel hyperparameters using the GPyTorch library for Python for the main experiments." No version numbers for PyTorch or GPyTorch are provided. (A hedged GPyTorch sketch follows the table.)
Experiment Setup | Yes | "Our Ferumal Flow kernelisation has a negligible number of hyperparameters. Apart from learning rate hyperparameters (i.e., learning rate, β1, β2 for Adam) and the number of layers, which are central to both kernelised and neural-net-based flows, we only need to choose a kernel with its corresponding hyperparameters (and a number of auxiliary points for large-scale experiments)." Table 5 additionally lists "layers", "kernel", "auxiliary points", "learning rate (lr)", "β1", "β2", "lr scheduler", "min lr", "epochs", and "batch size". (An illustrative configuration sketch follows the table.)
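
For context on the Dataset Splits row, here is a minimal, hypothetical sketch of the kind of train/validation/test protocol the quoted observation implies. The file path and the 80/10/10 split fractions are assumptions for illustration; the paper does not state them.

```python
import numpy as np

# Hypothetical loader: assumes a preprocessed UCI table saved locally as .npy.
data = np.load("power.npy")  # shape: (n_samples, n_features); path is an assumption

rng = np.random.default_rng(0)
perm = rng.permutation(len(data))

# Illustrative 80/10/10 split; the paper's exact fractions are not quoted here.
n_train = int(0.8 * len(data))
n_val = int(0.1 * len(data))
train = data[perm[:n_train]]
val = data[perm[n_train:n_train + n_val]]
test = data[perm[n_train + n_val:]]

# Overfitting in low-data regimes shows up as validation/test loss rising
# while training loss keeps falling, as the quoted passage describes.
```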
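The Software Dependencies row mentions GPyTorch for learning kernel hyperparameters. The following is a generic sketch of gradient-based kernel hyperparameter learning with GPyTorch, not the paper's implementation: the objective below is a stand-in placeholder, since the actual Ferumal-flow objective is the flow log-likelihood.

```python
import torch
import gpytorch

# RBF kernel with a learnable lengthscale and outputscale.
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

X = torch.randn(128, 2)  # toy 2D inputs, standing in for flow inputs
optimizer = torch.optim.Adam(kernel.parameters(), lr=1e-2, betas=(0.9, 0.999))

for step in range(100):
    optimizer.zero_grad()
    # Dense kernel matrix; use .evaluate() on older GPyTorch versions.
    K = kernel(X).to_dense()
    # Placeholder objective: in Ferumal flows this would be the negative flow
    # log-likelihood; here we simply differentiate through log det(K + eps*I).
    loss = -torch.logdet(K + 1e-3 * torch.eye(len(X)))
    loss.backward()
    optimizer.step()
```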
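The Experiment Setup row enumerates the hyperparameters reported in the paper's Table 5. A configuration dictionary makes that search space concrete; every value below is an illustrative assumption, not a figure taken from the paper.

```python
# Hypothetical values for illustration only; the paper's Table 5 reports the
# actual per-dataset settings.
config = {
    "layers": 8,               # number of flow layers
    "kernel": "rbf",           # kernel choice for the kernelised coupling
    "auxiliary_points": 100,   # only needed for large-scale experiments
    "lr": 1e-3,                # Adam learning rate
    "beta1": 0.9,              # Adam β1
    "beta2": 0.999,            # Adam β2
    "lr_scheduler": "cosine",  # learning-rate schedule
    "min_lr": 1e-5,            # floor for the schedule
    "epochs": 200,
    "batch_size": 512,
}
```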