Hierarchical Multi-Scale Gaussian Transformer for Stock Movement Prediction
Authors: Qianggang Ding, Sifan Wu, Hao Sun, Jiadong Guo, Jian Guo
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | From Section 5 (Experiments): To evaluate the proposed methods, we use two stock datasets: one from the NASDAQ market and the other from the China A-shares market. The details of the two datasets are listed in Table 1. In the following subsections, we will introduce the data collection process and show our empirical results from numerical experiments. We also conduct an incremental analysis to explore the effectiveness of each proposed enhancement for Transformer. |
| Researcher Affiliation | Collaboration | Qianggang Ding¹,², Sifan Wu², Hao Sun³, Jiadong Guo¹ and Jian Guo¹ (¹Peng Cheng Laboratory, ²Tsinghua University, ³The Chinese University of Hong Kong); {dqg18, wusf18}@mails.tsinghua.edu.cn, sh018@ie.cuhk.edu.hk, {guojd, guoj}@pcl.ac.cn |
| Pseudocode | No | The paper describes the proposed method using text and a diagram (Figure 1), but it does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific link to a code repository, nor does it state that the source code for the methodology is openly available or provided in supplementary materials. |
| Open Datasets | No | The paper states, 'We collect the daily quote data of all 3243 stocks from NASDAQ stock market from July 1st, 2010 to July 1st, 2019 and the 15-min quote data of 500 CSI-500 component stocks from China A-shares market from December 1st, 2015 to December 1st, 2019.' It does not provide a direct link or concrete access information for these specific datasets. |
| Dataset Splits | Yes | To avoid the data leakage problem, we strictly follow the sequential order to split training/validation/test sets. For instance, we split the NASDAQ data and the China A-shares data into training/validation/test sets by 8-year/1-year/1-year and 3-year/6-month/6-month, respectively. (A chronological-split sketch is given after this table.) |
| Hardware Specification | Yes | We implement B-TF, MG-TF and HMG-TF with PyTorch framework on Nvidia Tesla V100 GPU. |
| Software Dependencies | No | The paper states 'We implement B-TF, MG-TF and HMG-TF with PyTorch framework', but it does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We use Adam optimizer with an initial learning-rate of 1e-4. The size of mini-batch is set to 256. The trade-off hyper-parameter γ is set to 0.05. All TF-based models have 3 multi-head self-attention blocks each with 4 heads. (A hedged configuration sketch is given after this table.) |
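
The chronological split described in the Dataset Splits row can be illustrated with a minimal sketch. This is not the authors' code: the DataFrame layout, the `date` column name, and the boundary dates in the usage comment are assumptions; only the idea of splitting strictly by time order to avoid leakage comes from the paper.

```python
# Minimal sketch of a leakage-free chronological split (8y/1y/1y for NASDAQ daily
# data, 3y/6m/6m for China A-shares 15-min data, per the paper). The DataFrame
# schema and the "date" column name are assumptions for illustration.
import pandas as pd

def chronological_split(df: pd.DataFrame, train_end: str, valid_end: str):
    """Split rows by timestamp so no future data leaks into earlier sets."""
    df = df.sort_values("date")
    train = df[df["date"] < train_end]
    valid = df[(df["date"] >= train_end) & (df["date"] < valid_end)]
    test = df[df["date"] >= valid_end]
    return train, valid, test

# Usage (boundary dates are placeholders following the 8y/1y/1y scheme):
# train, valid, test = chronological_split(nasdaq_df, "YYYY-MM-DD", "YYYY-MM-DD")
```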
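The Experiment Setup row can likewise be turned into a hedged PyTorch sketch. Only the optimizer (Adam, lr 1e-4), the mini-batch size of 256, γ = 0.05, and the 3 blocks × 4 heads come from the paper; `d_model`, the classification head, the use of the last time step, and the training-step structure are assumptions, and the paper's HMG-TF additionally uses multi-scale Gaussian attention and other components not reproduced here.

```python
import torch
import torch.nn as nn

# 3 self-attention blocks with 4 heads each, as reported; d_model is an assumed value.
d_model, n_heads, n_blocks = 64, 4, 3

encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
backbone = nn.TransformerEncoder(encoder_layer, num_layers=n_blocks)
head = nn.Linear(d_model, 2)  # up/down movement classifier (output size assumed)

# Adam with an initial learning rate of 1e-4, as stated in the paper.
optimizer = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-4)

# Training-step sketch; a DataLoader with batch_size=256 would feed (x, y) pairs.
# The trade-off hyper-parameter gamma=0.05 weights an auxiliary term in the
# paper's loss, which is omitted here.
def train_step(x, y):
    """x: (batch, seq_len, d_model) price features; y: (batch,) movement labels."""
    logits = head(backbone(x)[:, -1])  # classify from the last time step (assumed choice)
    loss = nn.functional.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```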