Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
InvariantStock: Learning Invariant Features for Mastering the Shifting Market
Authors: Haiyao Cao, Jinan Zou, Yuhang Liu, Zhen Zhang, Ehsan Abbasnejad, Anton van den Hengel, Javen Qinfeng Shi
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results demonstrate that the proposed InvariantStock not only delivers robust and accurate predictions but also outperforms existing baseline methods in both prediction tasks and backtesting within the dynamically changing markets of China and the United States. |
| Researcher Affiliation | Academia | Haiyao Cao, Jinan Zou, Yuhang Liu, Zhen Zhang, Ehsan Abbasnejad, Anton van den Hengel, Javen Qinfeng Shi — Australian Institute for Machine Learning, The University of Adelaide, Australia |
| Pseudocode | Yes | Algorithm 1 Training Process |
| Open Source Code | Yes | Our code is available at https://github.com/Haiyao-Nero/InvariantStock |
| Open Datasets | No | We conducted comprehensive assessments on both the China and the US stock markets, spanning more than 20 years. The details of these datasets are summarized in Table 1. We collected an extensive range of stock data, striving to accurately represent the real market conditions pertinent to portfolio selection. This extensive data collection was also aimed at minimizing potential biases in the dataset, thereby ensuring a more reliable and accurate assessment. |
| Dataset Splits | Yes | China: Train 05/1995–12/2016 (2662 stocks), Validation 01/2017–12/2019 (3440), Test 01/2020–10/2022 (4048); US: Train 01/1990–12/2018 (4120), Validation 01/2019–12/2020 (4831), Test 01/2021–01/2024 (6314) |
| Hardware Specification | Yes | The experimental setup for our study was conducted using a GTX3090 GPU. |
| Software Dependencies | No | Adam with a learning rate of 0.0005 is used as the optimizer, and it is scheduled by a OneCycle scheduler. |
| Experiment Setup | Yes | The look-back window length is set to 20. Adam with a learning rate of 0.0005 is used as the optimizer, and it is scheduled by a OneCycle scheduler. α, β, and θ are all set to 1; there is no further tuning for these parameters. |