StockFormer: Learning Hybrid Trading Machines with Predictive Coding
Authors: Siyu Gao, Yunbo Wang, Xiaokang Yang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | StockFormer significantly outperforms existing approaches across three publicly available financial datasets in terms of portfolio returns and Sharpe ratios. |
| Researcher Affiliation | Academia | Siyu Gao, Yunbo Wang, and Xiaokang Yang, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University {siyu.gao, yunbow, xkyang}@sjtu.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that its source code is publicly available. |
| Open Datasets | No | The paper states 'three publicly available financial datasets' and names them (CSI-300, NASDAQ-100, Cryptocurrency), mentioning data collection from 'Yahoo Finance'. However, it does not provide specific links, DOIs, repositories, or formal citations for these processed datasets to ensure reproducibility of the exact data used. |
| Dataset Splits | No | The paper describes training and test splits for all datasets (e.g., 1,935 training days and 728 test trading days for CSI-300), but it does not explicitly mention or detail a separate validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like the 'FinRL platform [Liu et al., 2021]' and algorithms like 'Soft Actor-Critic (SAC) [Haarnoja et al., 2018]' and 'DDPG [Lillicrap et al., 2016]', but it does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |