Deep Learning for Event-Driven Stock Prediction
Authors: Xiao Ding, Yue Zhang, Ting Liu, Junwen Duan
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our model can achieve nearly 6% improvements on S&P 500 index prediction and individual stock prediction, respectively, compared to state-of-the-art baseline methods. |
| Researcher Affiliation | Academia | Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China ({xding, tliu, jwduan}@ir.hit.edu.cn); Singapore University of Technology and Design (yue_zhang@sutd.edu.sg) |
| Pseudocode | Yes | Algorithm 1: Event Embedding Training Process (a hedged sketch of this training loop appears after the table). |
| Open Source Code | No | The paper does not explicitly provide a link or statement about the availability of its source code. Footnote 1 refers to the dataset released by Ding et al. [2014], not the code for this paper's methodology. |
| Open Datasets | Yes | We use financial news from Reuters and Bloomberg over the period from October 2006 to November 2013, released by Ding et al. [2014]1. (Footnote 1: http://ir.hit.edu.cn/~xding/index_english.htm) |
| Dataset Splits | Yes | Detail statistics of training, development (tuning) and test sets are shown in Table 1. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions tools like 'ReVerb' and 'ZPar' and algorithms like 'skip-gram' but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The standard L2 regularization weight λ is set as 0.0001. The iteration number N is set to 500. The input of the neural tensor network is word embeddings and the output is event embeddings. We learn the initial word representation of d-dimensions (d = 100) from a large-scale financial news corpus, using the skip-gram algorithm [Mikolov et al., 2013]. We use a feedforward neural network with one hidden layer and one output layer. (This prediction network is sketched second after the table.) |
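For reference, Algorithm 1 in the paper trains event embeddings by composing (actor, action, object) word embeddings through a neural tensor network and scoring each real event against a randomly corrupted one with a max-margin loss. The PyTorch sketch below illustrates that structure under stated assumptions: the `TensorLayer` and `EventNTN` names, the slice count `k = 100`, the SGD optimizer, and the random placeholder batches are ours, not the authors'; only d = 100, λ = 0.0001, and N = 500 are taken from the paper.

```python
import torch
import torch.nn as nn

class TensorLayer(nn.Module):
    """Bilinear tensor layer: tanh(a^T T^[1:k] b + W[a; b] + bias)."""
    def __init__(self, d_in: int, k: int):
        super().__init__()
        self.T = nn.Parameter(torch.randn(k, d_in, d_in) * 0.01)  # k bilinear slices
        self.W = nn.Linear(2 * d_in, k)                           # standard layer over [a; b]

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # a, b: (batch, d_in) -> (batch, k)
        bilinear = torch.einsum('bi,kij,bj->bk', a, self.T, b)
        return torch.tanh(bilinear + self.W(torch.cat([a, b], dim=-1)))

class EventNTN(nn.Module):
    """Compose (actor O1, action P, object O2) embeddings into an event embedding U."""
    def __init__(self, d: int = 100, k: int = 100):
        super().__init__()
        self.t1 = TensorLayer(d, k)   # R1 = g(O1, P)
        self.t2 = TensorLayer(d, k)   # R2 = g(P, O2)
        self.t3 = TensorLayer(k, k)   # U  = g(R1, R2)
        self.score = nn.Linear(k, 1)  # scalar plausibility score of the event

    def forward(self, o1, p, o2):
        u = self.t3(self.t1(o1, p), self.t2(p, o2))
        return self.score(u).squeeze(-1)

model = EventNTN(d=100, k=100)  # d = 100 as in the paper; k is our assumption
# weight_decay stands in for the L2 term with λ = 0.0001; SGD is our choice
opt = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

for step in range(500):  # N = 500 iterations, as in the paper
    # Placeholder batches; in the paper these are skip-gram word embeddings of
    # (actor, action, object) tuples extracted from financial news.
    o1, p, o2 = (torch.randn(32, 100) for _ in range(3))
    o1_corrupt = torch.randn(32, 100)  # corrupted event: actor replaced by a random word
    # Max-margin loss: real event should outscore the corrupted one by at least 1
    loss = torch.relu(1.0 - model(o1, p, o2) + model(o1_corrupt, p, o2)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```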
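The prediction side, as quoted in the Experiment Setup row, is a feedforward network with one hidden layer and one output layer over event embeddings. A minimal sketch follows, assuming a binary rise/fall target, a sigmoid hidden activation, and a hypothetical day-level event-embedding input; only the one-hidden-layer architecture and λ = 0.0001 come from the paper.

```python
import torch
import torch.nn as nn

class UpDownClassifier(nn.Module):
    """One hidden layer and one output layer over event embeddings."""
    def __init__(self, k: int = 100, hidden: int = 100):
        super().__init__()
        self.hidden = nn.Linear(k, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, event_emb: torch.Tensor) -> torch.Tensor:
        return self.out(torch.sigmoid(self.hidden(event_emb)))  # logit for "index rises"

clf = UpDownClassifier()
# λ = 0.0001 L2 regularization from the paper, again via weight_decay
opt = torch.optim.SGD(clf.parameters(), lr=0.1, weight_decay=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(32, 100)                   # placeholder day-level event embeddings
y = torch.randint(0, 2, (32, 1)).float()   # 1 = index rises, 0 = falls
loss = loss_fn(clf(x), y)
loss.backward()
opt.step()
```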