Statistical Spatially Inhomogeneous Diffusion Inference

Authors: Yinuo Ren, Yiping Lu, Lexing Ying, Grant M. Rotskoff

AAAI 2024

Reproducibility Assessment (Variable, Result, LLM Response)

Research Type: Experimental
LLM Response: "Our theoretical results are bolstered by numerical experiments demonstrating accurate inference of spatially-inhomogeneous diffusion tensors. In this section, we present numerical results on a two-dimensional example to illustrate the accordance between our theoretical convergence rates and those of our proposed neural diffusion estimator."

Researcher Affiliation: Academia
LLM Response: Yinuo Ren (1), Yiping Lu (2), Lexing Ying (1,3), Grant M. Rotskoff (1,4)
    1. Institute for Computational and Mathematical Engineering (ICME), Stanford University
    2. Courant Institute of Mathematical Sciences, New York University
    3. Department of Mathematics, Stanford University
    4. Department of Chemistry, Stanford University

Pseudocode: Yes
LLM Response:
    Algorithm 1: Diffusion inference within function class G
    1: Find the drift estimator b̂ := argmin_{b ∈ G^d} L^b_N(b; (x_k)_{k=0}^N);
    2: Find the diffusion estimator D̂ := argmin_{D ∈ G^{d×d}} L^D_N(D; (x_k)_{k=0}^N, b̂).

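To make the two-stage structure of Algorithm 1 concrete, the following is a minimal PyTorch sketch, not the authors' implementation. It assumes standard discretized losses in which L^b_N regresses the drift on the scaled increments and L^D_N fits the diffusion to outer products of drift-corrected increments, under the convention dX_t = b(X_t) dt + sigma(X_t) dW_t with D = sigma sigma^T; the paper's exact loss definitions and normalization may differ.

    import torch

    def fit_drift(model_b, x, dt, epochs=1000, lr=1e-3):
        # Stage 1: fit the drift estimator b-hat by regressing the scaled
        # increments (x_{k+1} - x_k)/dt on b(x_k) (assumed form of L^b_N).
        opt = torch.optim.Adam(model_b.parameters(), lr=lr)
        inc = (x[1:] - x[:-1]) / dt  # (N, d) scaled increments
        for _ in range(epochs):
            opt.zero_grad()
            loss = ((model_b(x[:-1]) - inc) ** 2).sum(dim=1).mean()
            loss.backward()
            opt.step()
        return model_b

    def fit_diffusion(model_D, model_b, x, dt, epochs=1000, lr=1e-3):
        # Stage 2: fit the diffusion estimator D-hat with the plug-in drift
        # b-hat; model_D maps points to d x d matrices (assumed form of L^D_N).
        opt = torch.optim.Adam(model_D.parameters(), lr=lr)
        with torch.no_grad():
            resid = x[1:] - x[:-1] - dt * model_b(x[:-1])        # drift-corrected increments
            target = resid[:, :, None] * resid[:, None, :] / dt  # (N, d, d) outer products
        for _ in range(epochs):
            opt.zero_grad()
            loss = ((model_D(x[:-1]) - target) ** 2).sum(dim=(1, 2)).mean()
            loss.backward()
            opt.step()
        return model_D

The plug-in structure mirrors the algorithm: the drift is estimated first, and the residuals it leaves behind are what the diffusion estimator is fit to.
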
Open Source Code: No
LLM Response: The paper does not contain any explicit statement about providing open-source code or a link to a code repository for the methodology described.

Open Datasets: No
LLM Response: The paper states: "We first generate data using the Euler-Maruyama method with a time step Δ0 = 2 × 10^-5 up to T0 = 10^4, and then sub-sample data at varying time steps and time horizons T for each experiment instance from this common trajectory." It does not provide concrete access information (link, DOI, formal citation) for a publicly available or open dataset.

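For reference, the quoted data-generation protocol can be outlined as below. This is a sketch under stated assumptions: the drift b and diffusion factor sigma are placeholder callables, since the paper's specific two-dimensional coefficients are not restated in this report.

    import numpy as np

    def euler_maruyama(b, sigma, x0, dt=2e-5, T=1e4, seed=0):
        # Simulate dX_t = b(X_t) dt + sigma(X_t) dW_t with Euler-Maruyama.
        # Note: T/dt = 5e8 steps at the quoted settings, so in practice the
        # trajectory would be streamed or thinned rather than stored whole.
        rng = np.random.default_rng(seed)
        n_steps = int(T / dt)
        x = np.empty((n_steps + 1, len(x0)))
        x[0] = x0
        for k in range(n_steps):
            dw = rng.normal(scale=np.sqrt(dt), size=len(x0))
            x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) @ dw
        return x

    def subsample(traj, dt0, dt, T):
        # Sub-sample the common trajectory at time step dt up to horizon T.
        stride = int(round(dt / dt0))
        n = int(T / dt)
        return traj[: n * stride + 1 : stride]
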
Dataset Splits: No
LLM Response: The paper describes generating test data but does not provide specific details about training, validation, and test dataset splits (e.g., percentages, sample counts, or references to predefined splits) that would be needed for reproduction.

Hardware Specification: Yes
LLM Response: The training process is executed on one Tesla V100 GPU.

Software Dependencies: No
LLM Response: The paper does not specify any software dependencies with version numbers (e.g., Python, specific libraries, frameworks).

Experiment Setup: Yes
LLM Response: We use a ResNet as our neural network structure with two residual blocks, each containing a fully-connected layer with a hidden dimension of 1000. The final training loss is thus L_N(ĝ) + λ L̂_per(ĝ), where λ is a hyperparameter and ĝ can be either b̂ or D̂. We first generate data using the Euler-Maruyama method with a time step Δ0 = 2 × 10^-5 up to T0 = 10^4, and then sub-sample data at varying time steps and time horizons T for each experiment instance from this common trajectory.

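A minimal PyTorch sketch of the stated architecture follows. Only the two residual blocks and the hidden width of 1000 come from the paper; the input/output dimensions, the activation function, and the treatment of the penalty term L̂_per are assumptions.

    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # One residual block: a fully-connected layer of width 1000 with a
        # skip connection (the activation choice is an assumption).
        def __init__(self, width=1000):
            super().__init__()
            self.fc = nn.Linear(width, width)
            self.act = nn.ReLU()

        def forward(self, x):
            return x + self.act(self.fc(x))

    class EstimatorNet(nn.Module):
        # ResNet estimator for g-hat (either b-hat or a flattened D-hat):
        # embed -> two residual blocks -> linear head.
        def __init__(self, in_dim=2, out_dim=2, width=1000):
            super().__init__()
            self.embed = nn.Linear(in_dim, width)
            self.blocks = nn.Sequential(ResidualBlock(width), ResidualBlock(width))
            self.head = nn.Linear(width, out_dim)

        def forward(self, x):
            return self.head(self.blocks(self.embed(x)))

    # Total objective as stated: L_N(g-hat) + lambda * L_per-hat(g-hat), where
    # lambda is a hyperparameter; the form of L_per-hat is not reproduced here.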