Squared Neural Families: A New Class of Tractable Density Models

Authors: Russell Tsuchida, Cheng Soon Ong, Dino Sejdinovic

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (4 experiments) | "All experiments are conducted on a dual Xeon 14-core E5-2690 with 30 GB of reserved RAM and a single NVIDIA Tesla P100 GPU. Full experimental details are given in Appendix E. We build our implementation on top of normflows [45], a PyTorch package for normalising flows. SNEFYs are built as a BaseDistribution, i.e. a base distribution inside a NormalizingFlow with zero or more flow layers. We train all models via maximum likelihood estimation (MLE), i.e. minimising the forward KL divergence. 2D synthetic unconditional density estimation: we consider the two-dimensional problems also benchmarked in [46]. We compare the test performance, computation time and parameter count of non-volume-preserving flows (NVPs) [7] with four types of base distribution: SNEFY, resampled [46] (Resampled), diagonal Gaussian (Gauss) and Gaussian mixture model (GMM)."
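The defining idea quoted above is that a SNEFY density is the square of a one-layer neural network multiplied by a base measure, which guarantees non-negativity and (for suitable activations) a tractable normalising constant. The following is a minimal NumPy sketch of that construction, not the authors' parameterisation or the normflows API; the names `W`, `b`, `v` and the cosine activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for a small 2D model of the form
#   p(x) ∝ (v^T sigma(W x + b))^2 * mu(x),
# where mu is a Gaussian base measure. These names and the cosine
# activation are assumptions for the sketch, not the paper's exact setup.
d, m = 2, 16                      # input dimension, hidden width
W = rng.normal(size=(m, d))
b = rng.normal(size=m)
v = rng.normal(size=m)

def unnormalised_density(x):
    """Squared one-hidden-layer network times a standard Gaussian base measure."""
    h = np.cos(W @ x + b)                    # hidden activations
    mu = np.exp(-0.5 * x @ x) / (2 * np.pi)  # N(0, I) density in 2D
    return (v @ h) ** 2 * mu                 # squaring ensures non-negativity

value = unnormalised_density(np.array([0.3, -0.7]))
```

Because the network output is squared, the sketch is non-negative everywhere by construction; the paper's contribution is that the integral of this quantity can be computed in closed form for certain activation/base-measure pairs, so the density can be normalised exactly.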
Researcher Affiliation | Academia | Russell Tsuchida (Data61, CSIRO); Cheng Soon Ong (Data61, CSIRO & Australian National University); Dino Sejdinovic (School of CMS & AIML, The University of Adelaide)
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | Software is available at https://github.com/RussellTsuchida/snefy.
Open Datasets | Yes | "We use the dataset [44] also used by [19] for point processes. This dataset, retrieved in 2015, is called the Revised New General Catalogue and Index Catalogue (RNGC/IC). ... We estimate distributions for cosmological redshift x1 ∈ R conditional on features x2 ∈ R^5 using a public dataset [2]."
Dataset Splits | No | "Secondly, rather than computing test performance at the last epoch of training, we follow the more standard procedure of returning the test performance evaluated at the epoch corresponding with the smallest validation performance. This validation/test monitoring results in substantial performance gains in all of the models, with no extra computational cost for SNEFY, Gauss and GMM."
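The model-selection rule quoted above can be sketched in a few lines of plain Python: report the test metric from the epoch with the smallest validation loss, rather than from the final epoch. This is an illustrative sketch of the stated procedure, not the authors' code; the `history` structure and numbers are made up.

```python
def select_by_validation(history):
    """Pick the epoch with the lowest validation loss.

    history: list of (val_loss, test_loss) pairs, one per epoch.
    Returns (best_epoch, test_loss_at_best_epoch).
    """
    best_epoch = min(range(len(history)), key=lambda e: history[e][0])
    return best_epoch, history[best_epoch][1]

# Toy run: validation loss bottoms out at epoch 1, so the test loss
# recorded at epoch 1 is what gets reported.
history = [(1.9, 2.0), (1.2, 1.3), (1.4, 1.5)]
epoch, test_loss = select_by_validation(history)  # -> (1, 1.3)
```

As the quote notes, this monitoring adds no training cost for models whose validation loss is cheap to evaluate alongside the training loss; only the per-epoch bookkeeping changes.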
Hardware Specification | Yes | "All experiments are conducted on a dual Xeon 14-core E5-2690 with 30 GB of reserved RAM and a single NVIDIA Tesla P100 GPU."
Software Dependencies | No | "We build our implementation on top of normflows [45], a PyTorch package for normalising flows."
Experiment Setup | Yes | "We train each model for a maximum of 20,000 iterations (while monitoring validation performance). We use Adam with default hyperparameters and weight decay 10^-3. The batch size is 2^10."