Neural Stochastic Differential Games for Time-series Analysis

Authors: Sungwoo Park, Byoungwoo Park, Moontae Lee, Changhee Lee

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Throughout the experiments on various datasets, we demonstrate the superiority of our framework over all the tested benchmarks in modeling time-series prediction by capitalizing on the advantages of applying cooperative games.
Researcher Affiliation | Collaboration | ¹LG AI Research ²Artificial Intelligence Graduate School, Chung-Ang University ³Department of Information and Decision Sciences, University of Illinois Chicago. Correspondence to: Sungwoo Park <sungwoopark.lg@lgresearch.ai>.
Pseudocode | Yes | Algorithm 1: Deep Neural Fictitious Play (a hedged sketch of this style of training loop follows the table).
Open Source Code | Yes | Our code is available at https://github.com/LGAI-AML/MaSDEs.
Open Datasets | Yes | We evaluated the time-series prediction performance of ours and the benchmarks on multiple real-world datasets: BAQD (Zhang et al., 2017), Speech (Warden, 2018), and Physionet (Silva et al., 2012).
Dataset Splits | Yes | We split each time-series in the interval [0, T] into two sub-intervals: the first 80% as the observation interval, i.e., O = [0, 0.8T], and the remaining 20% as the prediction interval, i.e., T = [0.8T, T]. For Physionet, we split time-series samples into two halves as training/evaluation sets; for the BAQD and Speech datasets, we divided time-series samples into 80/20 training/testing splits (see the splitting sketch after the table).
Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running experiments were provided.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., library names or solver names with specific versions) were mentioned.
Experiment Setup | Yes | In all experiments with real-world datasets, we train each model for 500 epochs using the Adam optimizer with a learning rate of 10^-3 and batch size of 128 (see the configuration sketch after the table).
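The paper's Algorithm 1 is a deep neural fictitious-play procedure. As a rough illustration of that family of methods, the sketch below alternates best-response updates for each agent while the remaining agents' strategies are frozen at their previous-stage snapshots. The agent modules, loss function, and data loader are hypothetical placeholders, not the authors' implementation; consult the released code for the actual procedure.

```python
# Minimal sketch of a fictitious-play-style training loop (assumed PyTorch
# setup). `agents` is a list of nn.Module strategies; `loss_fn(agent,
# others, batch)` is a hypothetical best-response objective for one agent
# against frozen opponents.
import copy
import torch

def fictitious_play(agents, loss_fn, loader, stages=10, inner_steps=100, lr=1e-3):
    """Alternately fit each agent's best response while the others are frozen."""
    for stage in range(stages):
        # Snapshot all strategies from the previous stage and freeze them.
        frozen = [copy.deepcopy(a).eval() for a in agents]
        for i, agent in enumerate(agents):
            opt = torch.optim.Adam(agent.parameters(), lr=lr)
            others = [f for j, f in enumerate(frozen) if j != i]
            for _, batch in zip(range(inner_steps), loader):
                opt.zero_grad()
                # Best-response objective for agent i against frozen opponents.
                loss = loss_fn(agent, others, batch)
                loss.backward()
                opt.step()
    return agents
```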
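The split protocol quoted in the Dataset Splits row (observation on [0, 0.8T], prediction on [0.8T, T], plus an 80/20 or 50/50 sample split) could be implemented along the following lines. This is a minimal sketch assuming each series is stored as (times, values) arrays; the function names and storage layout are illustrative, not taken from the released preprocessing code.

```python
# Sketch of the interval and sample splits described in the paper.
import numpy as np

def split_interval(times, values, frac=0.8):
    """Split one series at 0.8*T into observation and prediction segments."""
    cutoff = frac * times.max()            # 0.8T
    obs = times <= cutoff                  # observation interval O = [0, 0.8T]
    return (times[obs], values[obs]), (times[~obs], values[~obs])

def split_samples(series, train_frac=0.8, seed=0):
    """Shuffle series and split train/test (80/20; use 0.5 for Physionet)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(series))
    n_train = int(train_frac * len(series))
    return [series[i] for i in idx[:n_train]], [series[i] for i in idx[n_train:]]
```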
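For reference, the quoted experiment setup maps directly onto a standard PyTorch training loop. Only the four reported hyperparameters (500 epochs, Adam, learning rate 1e-3, batch size 128) come from the paper; `model`, `dataset`, and `loss_fn` are assumed stand-ins for the paper's components.

```python
# Sketch wiring up the reported hyperparameters; all components other than
# the hyperparameter values are hypothetical placeholders.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, loss_fn, epochs=500, batch_size=128, lr=1e-3):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for batch in loader:
            opt.zero_grad()
            loss = loss_fn(model, batch)
            loss.backward()
            opt.step()
    return model
```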