Probabilistic Forecasting with Stochastic Interpolants and Föllmer Processes

Authors: Yifan Chen, Mark Goldstein, Mengjian Hua, Michael Samuel Albergo, Nicholas Matthew Boffi, Eric Vanden-Eijnden

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We highlight the utility of our approach on several complex, high-dimensional forecasting problems, including stochastically forced Navier-Stokes and video prediction on the KTH and CLEVRER datasets. The code is available at https://github.com/interpolants/forecasting."
Researcher Affiliation | Academia | "Courant Institute of Mathematical Sciences, New York University, New York, NY, USA."
Pseudocode | Yes | Algorithm 1 (Training), Algorithm 2 (Sampling), Algorithm 3 (Video EM: Euler-Maruyama in latent space).
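For context on the Euler-Maruyama step referenced in Algorithm 3, the following is a minimal, generic sketch of integrating an SDE dX_t = b(t, X_t) dt + sigma(t) dW_t, not the paper's exact algorithm; `drift`, `sigma`, and the toy usage below are illustrative placeholders for the learned drift of the forecasting SDE and its noise amplitude.

```python
import torch

def euler_maruyama(drift, sigma, x0, n_steps=100):
    """Generic Euler-Maruyama integration of dX_t = drift(t, X_t) dt + sigma(t) dW_t
    over t in [0, 1]. `drift` and `sigma` stand in for the learned quantities."""
    dt = 1.0 / n_steps
    x = x0.clone()
    for i in range(n_steps):
        t = i * dt
        dw = torch.randn_like(x) * dt ** 0.5   # Brownian increment ~ N(0, dt)
        x = x + drift(t, x) * dt + sigma(t) * dw
    return x

# Toy usage with an Ornstein-Uhlenbeck-like drift standing in for the network.
x1 = euler_maruyama(drift=lambda t, x: -x, sigma=lambda t: 0.5,
                    x0=torch.randn(16, 2))
```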
Open Source Code | Yes | "The code is available at https://github.com/interpolants/forecasting."
Open Datasets | Yes | "including stochastically forced Navier-Stokes and video prediction on the KTH (Schuldt et al., 2004) and CLEVRER datasets (Yi et al., 2019)."
Dataset Splits | No | "We split the data into 90% training data and 10% test data. We use a batch size of 100. In total we train 50 epochs." A train/test split is described, but no validation split percentage or methodology is provided.
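As an illustration of the quoted split, here is a minimal sketch of a 90%/10% train/test partition with batch size 100; the `TensorDataset` and its shapes are placeholders, not the actual KTH, CLEVRER, or Navier-Stokes data.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Placeholder data standing in for the actual forecasting dataset.
dataset = TensorDataset(torch.randn(1000, 3, 64, 64))

n_train = int(0.9 * len(dataset))                      # 90% training
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=100, shuffle=True)  # batch size 100 as quoted
```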
Hardware Specification | Yes | "The model is trained on a single Nvidia A100 GPU and it takes less than 1 day."
Software Dependencies | No | "We use the jax-cfd package (Dresdner et al., 2022) for the mesh generation and domain discretization." Individual packages are named, but no versioned dependency list is provided.
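For reference, a minimal sketch of setting up a periodic grid with jax-cfd is shown below; the resolution and domain size are assumptions for illustration and are not taken from the paper.

```python
import jax.numpy as jnp
import jax_cfd.base as cfd

# Illustrative 256x256 periodic grid on [0, 2*pi]^2 (values are placeholders).
grid = cfd.grids.Grid((256, 256), domain=((0, 2 * jnp.pi), (0, 2 * jnp.pi)))
```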
Experiment Setup | Yes | "We train the network using a batch size of 104, the default AdamW optimizer with base learning rate l = 10^-3, and a cosine scheduler that decreases the learning rate each epoch, reaching 0 after 300 epochs."
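A minimal PyTorch sketch of the quoted optimization setup, assuming a per-epoch scheduler step; `model` is a stand-in for the actual network, not the paper's implementation.

```python
import torch

# `model` is a placeholder for the velocity-field network.
model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300, eta_min=0.0)

for epoch in range(300):
    # ... one epoch of training steps would go here ...
    scheduler.step()  # anneal the learning rate once per epoch, reaching 0 at epoch 300
```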