Modeling Temporal Data as Continuous Functions with Stochastic Process Diffusion
Authors: Marin Biloš, Kashif Rasul, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 5 we empirically show that our model outperforms the baselines on all tasks. (Section 5: Experiments) |
| Researcher Affiliation | Collaboration | ¹Technical University of Munich, Germany; ²Machine Learning Research, Morgan Stanley, United States. |
| Pseudocode | Yes | In Algorithms 1 and 2 we provide the pseudocode for training the model and sampling new data for the DSPD-GP model. (Algorithm 1: Training; Algorithm 2: Sampling, both for DSPD-GP diffusion.) A hedged sketch of this training loop appears after the table. |
| Open Source Code | Yes | https://github.com/morganstanley/MSML/tree/main/papers/Stochastic_Process_Diffusion |
| Open Datasets | Yes | We test our model as defined in Section 4.1 and Figure 2 against TimeGrad (Rasul et al., 2021b) on three established real-world datasets: Electricity, Exchange and Solar (Lai et al., 2018). Table 4. Multivariate dimension, domain, frequency, total training time steps, and prediction length properties of the training datasets used in the forecasting experiments. See the dataset-loading sketch after the table. |
| Dataset Splits | No | The paper uses "training data" and "test set" but does not provide specific split percentages, sample counts, or explicit cross-validation methodology for their experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory specifications, or cloud computing instance types used for the experiments. |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies, such as Python, PyTorch, or other libraries, that would be needed for replication. |
| Experiment Setup | No | The paper describes the model architecture and training procedure (e.g., Algorithms 1 and 2) but does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or a dedicated experimental-setup section with training configurations in the main text. |
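
The Pseudocode row above points to Algorithms 1 and 2 (training and sampling for DSPD-GP diffusion). The snippet below is a minimal sketch of what such a training step could look like, written only from the high-level idea of DDPM-style diffusion where the noise is drawn from a Gaussian process over the observation times rather than i.i.d. per point. The network `eps_model`, the RBF kernel, the length scale, and all tensor shapes are assumptions for illustration, not the authors' released implementation.

```python
import torch

def rbf_gp_sample(t, x_shape, length_scale=0.1, jitter=1e-4):
    """Sample noise eps ~ GP(0, k) evaluated at observation times t.

    t: (batch, n) observation times; x_shape: (batch, n, dim).
    Returns noise correlated along the time axis. All shapes are assumptions.
    """
    diff = t.unsqueeze(-1) - t.unsqueeze(-2)              # (batch, n, n)
    cov = torch.exp(-diff.pow(2) / (2 * length_scale**2))  # RBF kernel (assumed)
    cov = cov + jitter * torch.eye(t.shape[-1])            # numerical stability
    L = torch.linalg.cholesky(cov)                         # (batch, n, n)
    white = torch.randn(x_shape)                           # i.i.d. Gaussian draw
    return L @ white                                       # correlate over time

def training_step(eps_model, x, t, alphas_bar):
    """One DDPM-style denoising step with GP noise (sketch of Algorithm 1).

    x: (batch, n, dim) values observed at times t;
    alphas_bar: (K,) cumulative noise schedule;
    eps_model: hypothetical network predicting the noise given (x_k, t, k).
    """
    K = alphas_bar.shape[0]
    k = torch.randint(0, K, (x.shape[0],))                 # random diffusion step
    a = alphas_bar[k].view(-1, 1, 1)
    eps = rbf_gp_sample(t, x.shape)                        # GP noise, not i.i.d.
    x_noisy = a.sqrt() * x + (1 - a).sqrt() * eps          # forward diffusion
    eps_hat = eps_model(x_noisy, t, k)                     # predict the noise
    return (eps - eps_hat).pow(2).mean()                   # denoising loss
```

Sampling (Algorithm 2) would invert this process step by step, again drawing the per-step noise from the same GP so that generated trajectories stay continuous in time; consult the linked repository for the actual procedure.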
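The Open Datasets row cites the Electricity, Exchange, and Solar benchmarks (Lai et al., 2018), which forecasting baselines such as TimeGrad typically consume through GluonTS. As a rough starting point, the snippet below loads them from the GluonTS dataset repository; the registry names ("electricity", "exchange_rate", "solar-energy") and the import path are assumptions based on common GluonTS usage and may differ across library versions.

```python
from gluonts.dataset.repository.datasets import get_dataset

# Assumed registry names for the three forecasting benchmarks; verify against
# dataset_names in your installed GluonTS version.
for name in ["electricity", "exchange_rate", "solar-energy"]:
    dataset = get_dataset(name)
    meta = dataset.metadata
    print(name, meta.freq, meta.prediction_length, len(list(dataset.train)))
```

Note that these repository datasets ship with fixed train/test splits, which is consistent with the paper referring to a "test set" without stating split percentages, as flagged in the Dataset Splits row.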