Robust and Scalable SDE Learning: A Functional Perspective

Authors: Scott Alexander Cameron, Tyron Luke Cameron, Arnu Pretorius, Stephen J. Roberts

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we provide some experiments to illustrate and verify the performance and capabilities of our proposed algorithm.
Researcher Affiliation | Collaboration | Scott Cameron (Oxford University; InstaDeep Ltd., United Kingdom; s.cameron@instadeep.com); Tyron Cameron (Discovery Insure, South Africa); Arnu Pretorius (InstaDeep Ltd., South Africa); Stephen Roberts (Oxford University, United Kingdom)
Pseudocode | Yes | Algorithm 1: Path Integral Importance Sampling; Algorithm 2: Transformed-State Path Integral Importance Sampling
Open Source Code | Yes | The code in the supplementary material includes a Dockerfile and instructions for running some of the experiments.
Open Datasets | Yes | The data set we used was the Hungarian chickenpox cases dataset from the UCI machine learning repository; it has 20 features per observation. This dataset can be found at https://archive.ics.uci.edu/ml/datasets/Hungarian+Chickenpox+Cases
Dataset Splits | No | The paper uses '16 independent paths' and 'observations' for training and evaluation metrics, but it does not specify explicit train/validation/test dataset splits (e.g., percentages or counts for distinct sets).
Hardware Specification | Yes | These models were trained on an Nvidia RTX 3070 with 8 GB of memory.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' and 'neural network' architectures, but does not provide specific version numbers for software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages.
Experiment Setup | Yes | For each model we used the Adam optimizer with a learning rate of 10^-3 and ran the optimization algorithm for 10^4 iterations. In both cases we used K = 64 and a time step size of Δt = 10^-2.
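The reported hyperparameters (Adam, learning rate 10^-3, 10^4 iterations, K = 64 samples per iteration) can be sketched without committing to a framework, since the paper names none. The quadratic objective and per-sample noise below are illustrative stand-ins for the paper's path-integral loss, and the hand-rolled Adam update is the standard formulation, not code from the paper.

```python
import math
import random

# Hyperparameters as reported in the Experiment Setup row.
LEARNING_RATE = 1e-3     # Adam learning rate 10^-3
NUM_ITERATIONS = 10_000  # 10^4 optimization iterations
K = 64                   # K = 64 samples averaged per gradient estimate
DT = 1e-2                # time step Δt = 10^-2 (used by the SDE solver, not Adam)

def adam_step(theta, grad, m, v, t,
              lr=LEARNING_RATE, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update with bias correction, for a scalar parameter."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy stochastic objective standing in for the paper's loss:
# the mean over K noisy quadratic samples, minimized at theta = 0.
random.seed(0)
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, NUM_ITERATIONS + 1):
    grad = sum(2.0 * (theta + 0.1 * random.gauss(0.0, 1.0))
               for _ in range(K)) / K
    theta, m, v = adam_step(theta, grad, m, v, t)
```

With these settings the parameter drifts from 5.0 to a small neighbourhood of the optimum, since Adam moves roughly one learning-rate unit per step while the gradient direction is consistent.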
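The Pseudocode row above names Algorithm 1, Path Integral Importance Sampling. A generic discretized version of that idea, simulating paths under a proposal drift and reweighting them toward a target drift via Girsanov's theorem, can be sketched as follows. This is a hedged, one-dimensional illustration of the general technique, not a line-for-line transcription of the paper's Algorithm 1; the function name and arguments (`b`, `u`, `sigma`) are this sketch's own.

```python
import math
import random

def euler_maruyama_importance_paths(b, u, sigma, x0,
                                    dt=1e-2, n_steps=100, K=64, seed=0):
    """Simulate K Euler-Maruyama paths under proposal drift u and return
    terminal states with Girsanov log-weights relative to target drift b.

    b, u : callables, target and proposal drift functions of the state.
    sigma: constant diffusion coefficient.
    """
    rng = random.Random(seed)
    sqrt_dt = math.sqrt(dt)
    states, log_weights = [], []
    for _ in range(K):
        x, logw = x0, 0.0
        for _ in range(n_steps):
            xi = rng.gauss(0.0, 1.0)        # Brownian increment / sqrt(dt)
            gap = b(x) - u(x)               # drift mismatch at current state
            # Girsanov log-density-ratio increment for this step.
            logw += gap / sigma * sqrt_dt * xi - gap * gap / (2.0 * sigma ** 2) * dt
            # Euler-Maruyama step under the proposal drift.
            x += u(x) * dt + sigma * sqrt_dt * xi
        states.append(x)
        log_weights.append(logw)
    return states, log_weights
```

As a sanity check, when the proposal drift equals the target drift the log-weights are identically zero, so the sampler reduces to plain Euler-Maruyama simulation.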