Efficient Sampling of Stochastic Differential Equations with Positive Semi-Definite Models
Authors: Anant Raj, Umut Simsekli, Alessandro Rudi
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper deals with the problem of efficient sampling from a stochastic differential equation, given the drift function and the diffusion matrix. The proposed approach leverages a recent model for probabilities [RC21] (the positive semi-definite (PSD) model) from which it is possible to obtain independent and identically distributed (i.i.d.) samples at precision ε with a cost that is m^2 d log(1/ε), where m is the dimension of the model and d the dimension of the space. The proposed approach consists of: first, computing the PSD model that satisfies the Fokker-Planck equation (or its fractional variant) associated with the SDE, up to error ε, and then sampling from the resulting PSD model. Assuming some regularity of the Fokker-Planck solution (i.e., β-times differentiability plus some geometric condition on its zeros), we obtain an algorithm that: (a) in the preparatory phase obtains a PSD model with L2 distance ε from the solution of the equation, with a model of dimension m = ε^(-(d+1)/(β-2s)) (log(1/ε))^(d+1), where 1/2 ≤ s ≤ 1 is the fractional power of the Laplacian, and total computational complexity of O(m^3.5 log(1/ε)); and then (b) for the Fokker-Planck equation, it is able to produce i.i.d. samples with error ε in Wasserstein-1 distance, at a cost of O(d ε^(-2(d+1)/(β-2)) log(1/ε)^(2d+3)) per sample. This means that, if the probability associated with the SDE is somewhat regular, i.e. β ≥ 4d + 2, then the algorithm requires O(ε^(-0.88) log(1/ε)^(4.5d)) in the preparatory phase and O(ε^(-1/2) log(1/ε)^(2d+2)) for each sample. Our results suggest that as the true solution gets smoother, we can circumvent the curse of dimensionality without requiring any sort of convexity. (A minimal illustrative sketch of a Gaussian PSD model follows the table.) |
| Researcher Affiliation | Academia | Anant Raj, Coordinated Science Laboratory, University of Illinois Urbana-Champaign; Inria, Ecole Normale Supérieure, PSL Research University, Paris, France. anant.raj@inria.fr. Umut Simsekli, Inria, CNRS, Ecole Normale Supérieure, PSL Research University, Paris, France. umut.simsekli@inria.fr. Alessandro Rudi, Inria, Ecole Normale Supérieure, PSL Research University, Paris, France. alessandro.rudi@inria.fr |
| Pseudocode | No | The paper describes the proposed approach and sampling method in narrative text (e.g., "The proposed approach consists of: first, computing...", "Efficient sampling method Let... Step 1: Find... Step 2: Denote...") but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks with structured steps. |
| Open Source Code | No | The paper does not contain any statement about making source code publicly available, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper is theoretical and presents mathematical analysis and proofs without conducting experiments on specific datasets. Thus, it does not provide concrete access information for a dataset. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets. Therefore, it does not specify any training/validation/test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not report on computational experiments that would require specific hardware specifications. There is no mention of GPUs, CPUs, or other computing infrastructure used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not report on computational experiments that would require specific software dependencies with version numbers. No programming languages, libraries, or solvers are mentioned with specific versions. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters, training configurations, or other system-level settings. Its focus is on mathematical proofs and theoretical guarantees. |
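
The paper's core object is the PSD probability model of [RC21]: a density of the form Φ(x)ᵀ A Φ(x), where A is a positive semi-definite m×m matrix and Φ(x)_i = exp(-η‖x − x_i‖²) is a Gaussian-kernel feature map, which is nonnegative by construction and admits closed-form integrals. The sketch below only illustrates that model class, not the paper's algorithm: the base points `centers`, bandwidth `eta`, and matrix `A` are arbitrary placeholders, and neither the fitting of the model to the Fokker-Planck solution nor the paper's O(m^2 d log(1/ε))-cost i.i.d. sampler is implemented here.

```python
# Minimal sketch of a Gaussian PSD model (illustrative only, not the paper's method).
# p(x) is proportional to Phi(x)^T A Phi(x), with Phi(x)_i = exp(-eta * ||x - x_i||^2) and A PSD.
import numpy as np

rng = np.random.default_rng(0)
d, m, eta = 2, 20, 1.5                 # space dimension d, model dimension m, kernel bandwidth
centers = rng.normal(size=(m, d))      # placeholder base points x_1, ..., x_m
B = rng.normal(size=(m, m))
A = B @ B.T / m                        # any PSD matrix guarantees a nonnegative model

def phi(x):
    """Feature map Phi(x)_i = exp(-eta * ||x - x_i||^2)."""
    return np.exp(-eta * np.sum((centers - x) ** 2, axis=1))

def density_unnormalized(x):
    """Phi(x)^T A Phi(x); nonnegative for every x because A is PSD."""
    f = phi(x)
    return float(f @ A @ f)

def normalization():
    """Closed-form integral: sum_ij A_ij (pi/(2 eta))^(d/2) exp(-eta ||x_i - x_j||^2 / 2)."""
    sq = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    K = (np.pi / (2 * eta)) ** (d / 2) * np.exp(-eta * sq / 2)
    return float(np.sum(A * K))

Z = normalization()
print("normalized density at the origin:", density_unnormalized(np.zeros(d)) / Z)
```

This kind of closed-form integration is what makes the Gaussian PSD model class amenable to efficient sampling once the model has been fitted to the Fokker-Planck solution, which is the setting analyzed in the paper.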