Automatic De-Biased Temporal-Relational Modeling for Stock Investment Recommendation

Authors: Weijun Chen, Shun Li, Xipu Yu, Heyuan Wang, Wei Chen, Tengjiao Wang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
---|---|---
Research Type | Experimental | Experiments on three datasets from distinct stock markets show that ADB-TRM outperforms state-of-the-art methods by 28.41% and 9.53% in terms of cumulative and risk-adjusted returns, respectively.
Researcher Affiliation | Academia | (1) Key Lab of High Confidence Software Technologies (MOE), School of Computer Science, Peking University, Beijing, China; (2) Research Center for Computational Social Science, Peking University; (3) Institute of Computational Social Science, Peking University (Qingdao); (4) University of International Relations; (5) New York University
Pseudocode | No | The paper describes the proposed framework using equations and textual explanations, but it does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | No | The paper links to an appendix (https://oncecwj.github.io/ADB-TRM-Appendix/Appendix.pdf) but does not contain an explicit statement that the source code for the methodology is provided, nor does the appendix itself contain the code.
Open Datasets | Yes | The first dataset [Feng et al., 2019b] consists of 1,026 stock shares from the relatively volatile US S&P 500 and NASDAQ Composite Indexes. The second dataset [Feng et al., 2019b] encompasses 1,737 stocks listed on the NYSE... The third dataset [Li et al., 2021] is centered around the widely recognized TOPIX100 Index...
Dataset Splits | Yes | Table 2 (dataset statistics) reports per-dataset date splits; e.g., NASDAQ: 1,026 stocks; train 01/13-12/15 (756 days); valid 01/16-12/16 (252 days); test 01/17-12/17 (237 days). A date-split sketch follows this table.
Hardware Specification | Yes | We tune the model and ablation variants on one Nvidia GeForce RTX 3090 GPU.
Software Dependencies | No | The paper mentions PyTorch but does not specify its version number.
Experiment Setup | Yes | For the proposed framework, the period P and dimension F are searched within {10, 20, 30, 40, 50} and finally set to 20 and 10, respectively. In temporal-relational fusion, we set the RNN hidden units Hu to 96. The dilation depth Dep and stacked layers Ls in WaveNet are both set to 2. The hyperparameters λ ∈ [1, 10], β, γ ∈ [0.5, 5], and b ∈ {1e2, 5e2, 1e3, 1.5e3, 2e3} are finally set to 4, 1.2, 1.2, and 1e3, respectively. The initial temperature τ0 ∈ {1e-2, 1e-1, 1, 10} and δ ∈ {10, 1e2, 1e3, 1e4} are set to 1 and 1e3. The hyperparameters U1 ∈ {20, 25, 30, 35, 40}, U2 ∈ {83%, 87%, 91%, 95%, 99%}, and U3 ∈ {63%, 67%, 71%, 75%, 79%} are finally set to 25, 95%, and 67%, respectively. ...the learning rate is set to 0.001, and the batch size is set to 20. A configuration sketch gathering these values follows this table.
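The date ranges in the Dataset Splits row are enough to reproduce the NASDAQ partition even though the code is not released. Below is a minimal sketch, assuming a pandas DataFrame with a `date` column; the column name and the calendar boundaries inferred from the MM/YY ranges are our assumptions, not the authors' code.

```python
import pandas as pd

# Date-based train/valid/test split for the NASDAQ dataset, per Table 2.
# Calendar boundaries are inferred from the reported MM/YY ranges; the
# trading-day counts in the comments are the figures quoted in the paper.
SPLITS = {
    "train": ("2013-01-01", "2015-12-31"),  # 756 trading days
    "valid": ("2016-01-01", "2016-12-31"),  # 252 trading days
    "test":  ("2017-01-01", "2017-12-31"),  # 237 trading days
}

def split_by_date(df: pd.DataFrame) -> dict:
    """Partition rows of a price DataFrame (with a 'date' column) by period."""
    dates = pd.to_datetime(df["date"])
    return {
        name: df.loc[(dates >= start) & (dates <= end)]
        for name, (start, end) in SPLITS.items()
    }
```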
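To make the reported setup easier to scan, here is a hedged sketch collecting the final hyperparameter values from the Experiment Setup row into a single Python config. All key names are invented for illustration; the paper does not release a configuration file.

```python
# Final hyperparameters reported for ADB-TRM (key names are our own).
CONFIG = {
    "period_P": 20,          # searched in {10, 20, 30, 40, 50}
    "dim_F": 10,             # searched in {10, 20, 30, 40, 50}
    "rnn_hidden_Hu": 96,
    "wavenet_dilation_Dep": 2,
    "wavenet_layers_Ls": 2,
    "lambda": 4,             # searched in [1, 10]
    "beta": 1.2,             # searched in [0.5, 5]
    "gamma": 1.2,            # searched in [0.5, 5]
    "b": 1e3,                # searched in {1e2, 5e2, 1e3, 1.5e3, 2e3}
    "tau0": 1,               # initial temperature, searched in {1e-2, 1e-1, 1, 10}
    "delta": 1e3,            # searched in {10, 1e2, 1e3, 1e4}
    "U1": 25,                # searched in {20, 25, 30, 35, 40}
    "U2": 0.95,              # 95%, searched in {83%, 87%, 91%, 95%, 99%}
    "U3": 0.67,              # 67%, searched in {63%, 67%, 71%, 75%, 79%}
    "learning_rate": 1e-3,
    "batch_size": 20,
}
```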