Riemannian Score-Based Generative Modelling
Authors: Valentin De Bortoli, Émile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate our approach on a variety of manifolds, and in particular with earth and climate science spherical data. ... In this section we benchmark the empirical performance of RSGMs along with other manifold-valued methods introduced in Sec. 5. ... We start by evaluating RSGMs on a collection of simple datasets, each containing an empirical distribution of occurrences of earth and climate science events on the surface of the earth. ... In Fig. 3, we observe that RSGMs are able to fit well the target distribution even in high dimension... From Table 5 we observe that RSGMs perform consistently... |
| Researcher Affiliation | Academia | Valentin De Bortoli, Émile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet. Equal contribution. Dept. of Computer Science, ENS, CNRS, PSL University, Paris, France. Dept. of Statistics, University of Oxford, Oxford, UK. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). |
| Pseudocode | Yes (see the GRW sketch after the table) | Algorithm 1 GRW (Geodesic Random Walk) ... Algorithm 2 RSGM (Riemannian Score-Based Generative Model) |
| Open Source Code | Yes | Code is available at https://github.com/vdebor/Riemannian-SGM |
| Open Datasets | Yes | earthquakes (NGDC/WDS), floods (Brakenridge, 2017) and wild fires (EOSDIS, 2020). |
| Dataset Splits | Yes (see the split sketch after the table) | We use an 80-20% train-test split for the earth and climate science datasets and the SO3(R) dataset, and 5-fold cross-validation for the torus dataset. |
| Hardware Specification | Yes | All experiments were run on a single NVIDIA A100 GPU. |
| Software Dependencies | Yes | Our implementation is built on Jax (Bradbury et al., 2018) and Geomstats (Miolane et al., 2020a,b). |
| Experiment Setup | Yes (see the training-setup sketch after the table) | The score network is a 3-layer MLP of 128 hidden units with a Swish activation. We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-3 and a batch size of 128. We train for 5000 epochs. |
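
The quoted Algorithm 1 (GRW) simulates manifold Brownian motion by repeatedly stepping along geodesics in random tangent directions. Below is a minimal sketch of that construction on the unit sphere, written with Jax since that is the paper's reported framework; the function names and the Euler step rule are illustrative assumptions, not the authors' code.

```python
import jax
import jax.numpy as jnp

def exp_sphere(x, v):
    """Sphere exponential map: follow the geodesic from x with initial velocity v."""
    norm = jnp.linalg.norm(v)
    safe = jnp.where(norm > 0.0, norm, 1.0)  # avoid 0/0 when v = 0
    return jnp.cos(norm) * x + jnp.sin(norm) * v / safe

def geodesic_random_walk(key, x0, t=1.0, n_steps=100):
    """Approximate Brownian motion on S^2 run for time t with n_steps Euler steps."""
    dt = t / n_steps
    def step(x, step_key):
        z = jax.random.normal(step_key, x.shape)
        v = z - jnp.dot(z, x) * x  # project ambient noise onto the tangent space at x
        return exp_sphere(x, jnp.sqrt(dt) * v), None
    keys = jax.random.split(key, n_steps)
    x_final, _ = jax.lax.scan(step, x0, keys)
    return x_final

# Usage: one approximate Brownian-motion sample started at the north pole.
sample = geodesic_random_walk(jax.random.PRNGKey(0), jnp.array([0.0, 0.0, 1.0]))
```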
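
The evaluation protocol in the Dataset Splits row can be reproduced along the following lines. This is a generic sketch: the data array, seeds, and fold handling are placeholders, not the authors' preprocessing.

```python
import numpy as np

def train_test_split(data, train_frac=0.8, seed=0):
    """80-20 split, as reported for the earth/climate and SO3(R) datasets."""
    idx = np.random.default_rng(seed).permutation(len(data))
    n_train = int(train_frac * len(data))
    return data[idx[:n_train]], data[idx[n_train:]]

def k_fold_indices(n, k=5, seed=0):
    """5-fold cross-validation indices, as reported for the torus dataset."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)  # each chunk serves once as the held-out fold

data = np.random.randn(1000, 3)  # placeholder, e.g. points on the sphere
train, test = train_test_split(data)
folds = k_fold_indices(len(data))
```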
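
The reported configuration (3-layer MLP, 128 hidden units, Swish, Adam with learning rate 1e-3, batch size 128) corresponds roughly to the setup sketched below, using Flax and Optax on top of Jax. The (x, t) conditioning, the output dimension, the reading of "3-layer" as three hidden layers, and the squared-error placeholder loss are all assumptions; the paper's actual objective is a Riemannian denoising score-matching loss.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax

class ScoreMLP(nn.Module):
    out_dim: int = 3  # ambient dimension, e.g. for S^2 (assumption)

    @nn.compact
    def __call__(self, x, t):
        h = jnp.concatenate([x, t[..., None]], axis=-1)
        for _ in range(3):                  # three hidden layers of 128 units (assumption)
            h = nn.swish(nn.Dense(128)(h))  # Swish activation
        return nn.Dense(self.out_dim)(h)

model = ScoreMLP()
params = model.init(jax.random.PRNGKey(0), jnp.ones((128, 3)), jnp.ones((128,)))
optimizer = optax.adam(learning_rate=1e-3)  # Adam, lr 1e-3
opt_state = optimizer.init(params)

@jax.jit
def train_step(params, opt_state, x, t, target):
    # Placeholder squared-error loss; stands in for the paper's
    # Riemannian denoising score-matching objective.
    def loss_fn(p):
        return jnp.mean((model.apply(p, x, t) - target) ** 2)
    grads = jax.grad(loss_fn)(params)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    return optax.apply_updates(params, updates), opt_state
```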