Enhancing Stock Movement Prediction with Adversarial Training
Authors: Fuli Feng, Huimin Chen, Xiangnan He, Ji Ding, Maosong Sun, Tat-Seng Chua
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real-world stock data show that our method outperforms the state-of-the-art solution [Xu and Cohen, 2018] with 3.11% relative improvements on average w.r.t. accuracy, validating the usefulness of adversarial training for stock prediction task. |
| Researcher Affiliation | Academia | Fuli Feng¹, Huimin Chen², Xiangnan He³, Ji Ding⁴, Maosong Sun² and Tat-Seng Chua¹. ¹National University of Singapore; ²Tsinghua University; ³University of Science and Technology of China; ⁴University of Illinois at Urbana-Champaign |
| Pseudocode | No | The paper describes the model architecture and training process in detail and provides figures illustrating the model components (Figures 2 and 3), but it does not contain any formal pseudocode blocks or labeled algorithms; a rough sketch of the adversarial training step is given after the table. |
| Open Source Code | Yes | Code can be accessed at https://github.com/hennande/Adv-ALSTM. |
| Open Datasets | Yes | We evaluate the proposed method on two benchmarks on stock movement prediction, ACL18 [Xu and Cohen, 2018] and KDD17 [Zhang et al., 2017]. |
| Dataset Splits | Yes | We temporally split the identified examples into training (Jan-01-2014 to Aug-01-2015), validation (Aug-01-2015 to Oct-01-2015), and testing (Oct-01-2015 to Jan-01-2016). ... We then temporally split the examples into training (Jan-01-2007 to Jan-01-2015), validation (Jan-01-2015 to Jan-01-2016) and testing (Jan-01-2016 to Jan-01-2017). (A sketch of this temporal split follows the table.) |
| Hardware Specification | No | The paper states that Adv-ALSTM is implemented with Tensorflow, but it does not provide any specific details about the hardware (e.g., GPU models, CPU types, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper states 'We implement the Adv-ALSTM with Tensorflow and optimize it using the mini-batch Adam', but it does not specify version numbers for these software dependencies. |
| Experiment Setup | Yes | We implement the Adv-ALSTM with Tensorflow and optimize it using the mini-batch Adam [Diederik and Jimmy, 2015] with a batch size of 1,024 and an initial learning rate of 0.01. ... We further tune β and ϵ within [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1] and [0.001, 0.005, 0.01, 0.05, 0.1], respectively. (A sketch of this hyperparameter sweep follows the table.) |
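
Since the paper provides no pseudocode, the following is a minimal sketch of the adversarial training step it describes: perturbing the latent representation with a fast-gradient direction before the prediction layer. The `encoder` and `classifier` models and the exact perturbation scaling are simplifying assumptions, not the authors' implementation.

```python
import tensorflow as tf

# Minimal sketch of adversarial training on latent features, in the
# spirit of Adv-ALSTM. NOT the authors' code: `encoder`, `classifier`,
# and the perturbation scaling are simplifying assumptions.
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def adversarial_step(encoder, classifier, x, y, optimizer,
                     epsilon=0.01, beta=0.05):
    with tf.GradientTape() as tape:
        latent = encoder(x, training=True)              # latent features e
        with tf.GradientTape() as inner:
            inner.watch(latent)
            clean_loss = loss_fn(y, classifier(latent, training=True))
        # Fast-gradient perturbation: normalized gradient scaled by epsilon.
        g = inner.gradient(clean_loss, latent)
        r_adv = epsilon * g / (tf.norm(g, axis=-1, keepdims=True) + 1e-9)
        adv_logits = classifier(latent + tf.stop_gradient(r_adv),
                                training=True)
        # Joint objective: clean loss plus beta-weighted adversarial loss.
        total_loss = clean_loss + beta * loss_fn(y, adv_logits)
    variables = encoder.trainable_variables + classifier.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(total_loss, variables),
                                  variables))
    return total_loss
```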
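
The reported date boundaries make the temporal split straightforward to reproduce. Below is a hypothetical pandas sketch for the ACL18 split; the file name and the `date` column are illustrative assumptions, not part of the released data.

```python
import pandas as pd

# Hypothetical sketch of the ACL18 temporal split quoted above.
# Assumes a DataFrame with a parseable 'date' column; the file name
# is illustrative only.
df = pd.read_csv("acl18_examples.csv", parse_dates=["date"])

train = df[(df["date"] >= "2014-01-01") & (df["date"] < "2015-08-01")]
valid = df[(df["date"] >= "2015-08-01") & (df["date"] < "2015-10-01")]
test  = df[(df["date"] >= "2015-10-01") & (df["date"] < "2016-01-01")]
```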
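
The quoted setup implies a simple grid search over β and ϵ. Here is a sketch under the assumption of a hypothetical `train_and_evaluate` helper that returns validation accuracy; only the grids, batch size, and learning rate come from the paper.

```python
import itertools

# Sketch of the hyperparameter sweep described in the setup quote.
# `train_and_evaluate` is a hypothetical helper returning validation
# accuracy; the beta/epsilon grids, batch size of 1,024, and learning
# rate of 0.01 are the values reported in the paper.
betas    = [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1]
epsilons = [0.001, 0.005, 0.01, 0.05, 0.1]

best = max(
    ((beta, eps, train_and_evaluate(beta=beta, epsilon=eps,
                                    batch_size=1024, lr=0.01))
     for beta, eps in itertools.product(betas, epsilons)),
    key=lambda t: t[2],
)
print("best beta=%.3f eps=%.3f val_acc=%.4f" % best)
```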