Learning Fractional White Noises in Neural Stochastic Differential Equations

Authors: Anh Tong, Thanh Nguyen-Tang, Toan Tran, Jaesik Choi

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From Section 5, "Experimental Results": "This section will first justify how close the samples generated by our method are to true ones. We then investigate the ability to learn the Hurst function for synthetic data and test our model for real-world data. Finally, we examine the role of Hurst exponents in score-based generative models." (A fractional Brownian motion sketch follows the table.)
Researcher Affiliation | Collaboration | Anh Tong (KAIST, anhtong@kaist.ac.kr); Thanh Nguyen-Tang (Johns Hopkins University, nguyent@cs.jhu.edu); Toan Tran (VinAI Research, Vietnam, v.toantm3@vinai.io); Jaesik Choi (KAIST, INEEJI, jaesik.choi@kaist.ac.kr)
Pseudocode | Yes | "We can elucidate the difference via Algorithm 1 and Algorithm 2 in Appendix."
Open Source Code | Yes | "Our source code is available at this repository."
Open Datasets | Yes | "We only consider the task of generating MNIST images [53]." (A data-loading sketch follows the table.)
Dataset Splits | Yes | "The data is retrieved from [85]. It is then standardized and split into the first 80% time points to train and the remaining 20% to test." (A split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | "All experiments are implemented using PyTorch [68] with the library torchsde [54, 47]." The frameworks are named, but no version numbers are specified. (A torchsde sketch follows the table.)
Experiment Setup | No | The paper mentions data splitting ("80% time points to train and the remaining 20% to test") and the number of data points/sample paths for synthetic data ("100 data points for each sample path and obtain up to 5 sample paths"). However, it does not provide specific hyperparameter values such as learning rates, batch sizes, epochs, or optimizer settings.
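
Background on the Hurst exponent referenced throughout the table: H controls the roughness and memory of fractional Brownian motion, the driving noise the paper learns. The NumPy sketch below samples fBm exactly via the classical Cholesky method; it is illustrative background only, not the paper's learned generator, and the function name, step counts, and O(n^3) method are assumptions made for the example.

import numpy as np

def sample_fbm(n_steps, hurst, T=1.0, seed=0):
    # Exact fBm sampling via the Cholesky method (illustrative; O(n^3)).
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Autocovariance of unit-step fractional Gaussian noise:
    # gamma(k) = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})
    k = np.arange(n_steps)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]  # Toeplitz covariance matrix
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    fgn = (L @ rng.standard_normal(n_steps)) * dt ** hurst  # scaled increments
    return np.concatenate([[0.0], np.cumsum(fgn)])  # fBm path with B(0) = 0

# H = 0.5 recovers standard Brownian motion; H > 0.5 gives persistent
# (positively correlated) increments, H < 0.5 anti-persistent ones.
path = sample_fbm(100, hurst=0.7)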
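
On the Open Datasets row: MNIST [53] is freely downloadable, for instance through torchvision. The snippet below is only an assumed way to fetch it; the paper does not say which tooling or preprocessing it uses, and root="./data" and the ToTensor transform are placeholders.

from torchvision import datasets, transforms

# Download MNIST (assumed tooling; the paper's pipeline is unspecified).
mnist = datasets.MNIST(root="./data", train=True, download=True,
                       transform=transforms.ToTensor())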
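
On the Dataset Splits row, a minimal sketch of the chronological 80/20 protocol the quote describes. Computing the standardization statistics on the training portion only is an assumption made to avoid leakage; the paper states that the data "is standardized and split" without specifying which statistics are used.

import numpy as np

def chronological_split(series, train_frac=0.8):
    # First 80% of time points for training, remaining 20% for testing.
    n_train = int(len(series) * train_frac)
    train, test = series[:n_train], series[n_train:]
    # Standardize with training statistics only (assumed; avoids leakage).
    mu, sigma = train.mean(), train.std()
    return (train - mu) / sigma, (test - mu) / sigma

train, test = chronological_split(np.random.default_rng(0).standard_normal(500))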
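
On the Software Dependencies row, a self-contained sketch of how torchsde is typically used: an SDE module exposes drift f(t, y) and diffusion g(t, y) together with noise_type and sde_type attributes, and torchsde.sdeint integrates it. Stock torchsde drives the SDE with standard Brownian motion (H = 1/2); the paper's learned fractional white noise is its contribution and is not reproduced here. Network sizes and the Euler solver are arbitrary choices for the example.

import torch
import torchsde

class NeuralSDE(torch.nn.Module):
    noise_type = "diagonal"  # attributes required by torchsde
    sde_type = "ito"

    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.drift = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim))
        self.diffusion = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim), torch.nn.Softplus())

    def f(self, t, y):  # drift f(t, y)
        return self.drift(torch.cat([t.expand(y.size(0), 1), y], dim=-1))

    def g(self, t, y):  # diagonal diffusion g(t, y)
        return self.diffusion(torch.cat([t.expand(y.size(0), 1), y], dim=-1))

sde = NeuralSDE()
y0 = torch.zeros(16, 1)         # batch of 16 initial states
ts = torch.linspace(0, 1, 100)  # 100 time points, matching the synthetic setup
ys = torchsde.sdeint(sde, y0, ts, method="euler")  # shape (100, 16, 1)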