Generative Learning for Financial Time Series with Irregular and Scale-Invariant Patterns
Authors: Hongbin Huang, Minghua Chen, Xiao Qiao
ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that FTS-Diffusion generates synthetic financial time series highly resembling observed data, outperforming state-of-the-art alternatives. Two downstream experiments demonstrate that augmenting real-world data with synthetic data generated by FTS-Diffusion reduces the error of stock market prediction by up to 17.9%. |
| Researcher Affiliation | Academia | Hongbin Huang, Minghua Chen, and Xiao Qiao; School of Data Science, City University of Hong Kong; hongbin.huang@my.cityu.edu.hk, {minghua.chen,xiaoqiao}@cityu.edu.hk |
| Pseudocode | Yes | The pseudo-code of SISC and the selection of parameters, such as the range of segment lengths and the number of clusters K, are detailed in Appendix B.1. We provide the pseudo-code of the sampling process in our FTS-Diffusion as Algorithm 2 in Appendix D.2. |
| Open Source Code | Yes | Codes are available in supplementary materials. |
| Open Datasets | Yes | We run experiments on three different types of financial assets with varying characteristics: the Standard and Poor's 500 index (S&P 500), the stock price of Google (GOOG), and the corn futures traded on the Chicago Board of Trade (ZC=F). Detailed data settings are given in Appendix E.1. (A hedged data-loading sketch follows the table.) |
| Dataset Splits | No | The paper specifies an '80/20 train-test split strategy' and mentions extending evaluation to '70/30 and 60/40 train/test split strategies', but it does not explicitly mention a 'validation' split with percentages or sample counts. |
| Hardware Specification | No | The paper includes a 'Complexity and Runtime Analysis' section (Table 5) which lists training and inference runtimes, but it does not provide any specific details about the hardware (e.g., GPU/CPU models, memory, or cloud instances) used for these experiments. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer', 'residual temporal convolutional (TCN) blocks', 'LSTMs or GRUs' but does not provide specific version numbers for any programming languages, libraries, or frameworks used (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | Our pattern-conditioned diffusion network utilizes six residual temporal convolutional (TCN) blocks... We set the number of diffusion steps to N = 100... We jointly train these two networks following the procedure in Sec. 4.2 using the Adam optimizer with a learning rate of 5e-04. We set the batch size to 32... We train this network using the Adam optimizer with a learning rate of 4e-04 over 1000 epochs. (A hedged training-configuration sketch follows the table.) |
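
The 'Open Datasets' and 'Dataset Splits' rows above describe three assets (S&P 500, GOOG, ZC=F) and a chronological 80/20 train-test split. The sketch below illustrates one way such data could be pulled and split; the `yfinance` source and the `^GSPC` ticker are assumptions, since the paper does not state how the series were obtained.

```python
# Hypothetical data-loading sketch; yfinance and the ticker symbols are
# assumptions, not details confirmed by the paper.
import yfinance as yf

TICKERS = {"S&P 500": "^GSPC", "GOOG": "GOOG", "Corn futures": "ZC=F"}

def load_and_split(ticker: str, train_frac: float = 0.8):
    """Download daily closing prices and apply a chronological 80/20 train-test split."""
    prices = yf.download(ticker, auto_adjust=True)["Close"].dropna()
    cut = int(len(prices) * train_frac)
    return prices.iloc[:cut], prices.iloc[cut:]  # no shuffling: earlier data train, later data test

splits = {name: load_and_split(t) for name, t in TICKERS.items()}
```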
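
The 'Experiment Setup' row quotes the main hyperparameters (six residual TCN blocks, N = 100 diffusion steps, Adam with learning rates 5e-04 and 4e-04, batch size 32, 1000 epochs). The PyTorch fragment below is a minimal sketch of those settings only; the module definitions are placeholders, not the authors' FTS-Diffusion implementation.

```python
# Sketch of the reported hyperparameters in PyTorch. The nn.Module placeholders
# stand in for the paper's networks; they are not the authors' architecture.
from torch import nn, optim

N_DIFFUSION_STEPS = 100   # "number of diffusion steps to N = 100"
BATCH_SIZE = 32           # "We set the batch size to 32"
N_EPOCHS = 1000           # for the separately trained network

diffusion_net = nn.Sequential(                  # placeholder for the six-TCN-block network
    nn.Conv1d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(64, 1, kernel_size=3, padding=1),
)
companion_net = nn.GRU(input_size=1, hidden_size=64, batch_first=True)  # paper mentions LSTMs/GRUs

# Joint training of the two networks (Sec. 4.2): Adam, learning rate 5e-04.
joint_optimizer = optim.Adam(
    list(diffusion_net.parameters()) + list(companion_net.parameters()), lr=5e-4
)

# The separately trained network uses Adam with learning rate 4e-04 over 1000 epochs.
separate_net = nn.GRU(input_size=1, hidden_size=64, batch_first=True)  # placeholder
separate_optimizer = optim.Adam(separate_net.parameters(), lr=4e-4)
```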