Sum-of-Squares Polynomial Flow
Authors: Priyank Jaini, Kira A. Selby, Yaoliang Yu
ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform several synthetic experiments on various density geometries to demonstrate the benefits (and shortcomings) of such transformations. SOS flows achieve competitive results in simulations and on several real-world datasets. We performed holistic synthetic experiments to gain an intuitive understanding of triangular maps and SOS flows in particular. Additionally, we compare SOS flows to previous neural density estimation methods on real-world datasets, where they achieved competitive performance. |
| Researcher Affiliation | Academia | 1University of Waterloo, Waterloo, Canada 2Waterloo AI Institute, Waterloo, Canada 3Vector Institute, Toronto, Canada. |
| Pseudocode | No | The paper includes schematic figures (Figure 1 and Figure 2) but does not provide any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present any structured code-like procedures. |
| Open Source Code | No | The paper states: 'We would also like to thank Ilya Kostrikov for the code which we adapted for our SOS Flow implementation.' This indicates adaptation of existing code, but there is no explicit statement or link confirming that the authors' own code for the described methodology is publicly released. |
| Open Datasets | Yes | We also performed density estimation experiments on 5 real world datasets that include four datasets from the UCI repository and BSDS300. |
| Dataset Splits | Yes | We report the average log-likelihood obtained using 10 fold cross-validation on held-out test sets for SOS flows. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments, such as GPU/CPU models, memory, or cloud computing instance types. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the implementation or experimentation. |
| Experiment Setup | Yes | The SOS transformation was trained using the maximum likelihood method with the source density as a standard normal distribution. We used stochastic gradient descent to train our models with a batch size of 1000, learning rate = 0.001, number of stacked blocks = 8, number of polynomials (k) = 5, and degree of polynomials (r) = 4, with 40 training epochs. |
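
The Experiment Setup row reports the training configuration, but the authors' code (adapted from Ilya Kostrikov's implementation) is not released. Below is a minimal, hypothetical PyTorch sketch of a one-dimensional SOS flow trained with the reported settings: k = 5 polynomials of degree r = 4 per block, 8 stacked blocks, SGD with batch size 1000 and learning rate 0.001 for 40 epochs, and a standard normal source density. The names `SOSBlock1D`, `SOSFlow1D`, and `train` are illustrative assumptions, not the paper's implementation, and the conditioner networks needed for multivariate data are omitted.

```python
import torch

# Hypothetical constants mirroring the reported experiment setup.
BATCH_SIZE = 1000      # batch size
LEARNING_RATE = 1e-3   # learning rate
NUM_BLOCKS = 8         # number of stacked SOS blocks
K = 5                  # number of polynomials per block
R = 4                  # degree of each polynomial
EPOCHS = 40            # training epochs


class SOSBlock1D(torch.nn.Module):
    """One 1-D SOS transformation
        T(z) = c + \\int_0^z \\sum_k ( \\sum_{l=0}^{R} a[k, l] u^l )^2 du,
    integrated in closed form (a degree-(2R+1) polynomial in z).
    The integrand is a sum of squares, so T is monotone increasing."""

    def __init__(self, k=K, r=R):
        super().__init__()
        self.a = torch.nn.Parameter(0.1 * torch.randn(k, r + 1))
        self.c = torch.nn.Parameter(torch.zeros(1))

    def forward(self, z):
        rp1 = self.a.shape[1]
        # Coefficients of sum_k p_k(u)^2, a polynomial of degree 2R.
        sq = [torch.zeros((), dtype=z.dtype, device=z.device) for _ in range(2 * rp1 - 1)]
        for i in range(rp1):
            for j in range(rp1):
                sq[i + j] = sq[i + j] + (self.a[:, i] * self.a[:, j]).sum()
        sq = torch.stack(sq)
        powers = torch.arange(1, 2 * rp1, dtype=z.dtype, device=z.device)
        # T(z): integrate term by term, u^m -> z^(m+1) / (m+1).
        t = self.c + ((z.unsqueeze(-1) ** powers) * (sq / powers)).sum(-1)
        # dT/dz = sum_k p_k(z)^2 >= 0, used for the change-of-variables term.
        dt = ((z.unsqueeze(-1) ** (powers - 1)) * sq).sum(-1)
        return t, torch.log(dt + 1e-12)


class SOSFlow1D(torch.nn.Module):
    """Stack of SOS blocks; returns the mapped point and the log-Jacobian."""

    def __init__(self, num_blocks=NUM_BLOCKS):
        super().__init__()
        self.blocks = torch.nn.ModuleList([SOSBlock1D() for _ in range(num_blocks)])

    def forward(self, x):
        log_det = torch.zeros_like(x)
        for block in self.blocks:
            x, ld = block(x)
            log_det = log_det + ld
        return x, log_det


def train(samples, model):
    """Maximum-likelihood training with SGD and a standard normal source density."""
    opt = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE)
    base = torch.distributions.Normal(0.0, 1.0)
    loader = torch.utils.data.DataLoader(samples, batch_size=BATCH_SIZE, shuffle=True)
    for _ in range(EPOCHS):
        for x in loader:
            z, log_det = model(x)  # map data towards the source density
            loss = -(base.log_prob(z) + log_det).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

As a toy usage example, `train(torch.randn(10000) * 2.0 + 1.0, SOSFlow1D())` would fit the sketch to a shifted 1-D Gaussian; the paper's actual experiments instead use multivariate synthetic densities, four UCI datasets, and BSDS300, which require conditioners over the preceding dimensions.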