Normalizing Flows on Tori and Spheres
Authors: Danilo Jimenez Rezende, George Papamakarios, Sébastien Racanière, Michael Albergo, Gurtej Kanwar, Phiala Shanahan, Kyle Cranmer
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate and compare our proposed flows based on their ability to learn sharp, correlated and multi-modal target densities. We used targets p(x) = e^(−βu(x))/Z with inverse temperature β and normalizer Z. We varied β to create targets with different degrees of concentration/sharpness. The models q_η were trained by minimizing the KL divergence. (A minimal sketch of this objective appears after the table.) |
| Researcher Affiliation | Collaboration | ¹DeepMind, London, U.K. ²New York University, New York, U.S.A. ³Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A. |
| Pseudocode | No | The paper does not contain pseudocode or a clearly labeled algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code related to the described methodology. |
| Open Datasets | No | The paper describes using 'synthetic inference problems' and defines its own 'target densities'; these are not stated to be publicly available datasets, and no link, DOI, or citation to such is provided. |
| Dataset Splits | No | The paper describes training and evaluation on synthetic target densities but does not specify any explicit train/validation/test dataset splits, as the data is generated based on defined distributions rather than pre-existing datasets. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions 'Adam (Kingma & Ba, 2015)' as an optimizer and 'ReLU-MLP' but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | All models were optimized using Adam (Kingma & Ba, 2015) with learning rate 2 × 10^−4, 20,000 iterations, and batch size 256. (A training-loop sketch using these reported settings follows the table.) |
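The "Research Type" row quotes the paper's reverse-KL training setup: a flow density q_η is fit to an unnormalized target p(x) = e^(−βu(x))/Z. The snippet below is a minimal sketch of that objective for a density on the circle; the potential `u`, the `flow.sample_and_log_prob` interface, and all other names are illustrative assumptions, not the authors' released code.

```python
import torch

def u(x):
    # Hypothetical periodic potential on the circle [0, 2*pi); the paper's
    # actual target potentials are specified in its experiments section.
    return torch.cos(x) + 0.5 * torch.cos(2.0 * x)

def reverse_kl_loss(flow, beta, batch_size=256):
    """Monte Carlo estimate of KL(q_eta || p), up to the unknown constant log Z.

    `flow.sample_and_log_prob` is an assumed interface returning samples
    x ~ q_eta together with their log-densities log q_eta(x).
    """
    x, log_q = flow.sample_and_log_prob(batch_size)
    # log p(x) = -beta * u(x) - log Z, so KL(q || p) = E_q[log q(x) + beta * u(x)] + log Z;
    # the unknown log Z is a constant and does not affect the gradients.
    return (log_q + beta * u(x)).mean()
```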
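The "Experiment Setup" row reports the only training hyperparameters given in the paper. A plain PyTorch training loop using exactly those reported values might look as follows; it assumes `flow` is a `torch.nn.Module` and reuses the hypothetical `reverse_kl_loss` helper from the other sketch, so it illustrates the stated setup rather than the authors' implementation.

```python
import torch

def train(flow, beta, iterations=20_000, batch_size=256, lr=2e-4):
    # Settings reported in the paper: Adam optimizer, learning rate 2e-4,
    # 20,000 iterations, batch size 256.
    optimizer = torch.optim.Adam(flow.parameters(), lr=lr)
    for _ in range(iterations):
        optimizer.zero_grad()
        loss = reverse_kl_loss(flow, beta, batch_size)
        loss.backward()
        optimizer.step()
    return flow
```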