Relation-Aware Transformer for Portfolio Policy Learning
Authors: Ke Xu, Yifan Zhang, Deheng Ye, Peilin Zhao, Mingkui Tan
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on real-world crypto-currency and stock datasets verify the state-of-the-art performance of RAT. |
| Researcher Affiliation | Collaboration | Ke Xu^1,2, Yifan Zhang^1,2, Deheng Ye^3, Peilin Zhao^3, Mingkui Tan^1 (^1 South China University of Technology, Guangzhou, China; ^2 Pazhou Lab, Guangzhou, China; ^3 Tencent AI Lab, Shenzhen, China) |
| Pseudocode | No | The paper describes the architecture and algorithms in prose and mathematical equations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available: https://github.com/Ivsxk/RAT. |
| Open Datasets | Yes | All crypto-currency datasets are accessed with Poloniex, where data selection is based on the method in [Jiang et al., 2017]. We also evaluate our methods on the S&P500 stock dataset obtained from Kaggle. |
| Dataset Splits | No | The paper provides statistics for training and test datasets in Table 1, but it does not explicitly mention or describe a separate validation dataset split with specific percentages or sample counts. |
| Hardware Specification | Yes | In the training process, we adopt Adam optimizer on a single NVIDIA Tesla P40 GPU. |
| Software Dependencies | No | The paper states that "RAT is implemented via pytorch" but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The number of attention heads is set to H=2, and the dimension of the feature space is set to d_f=12. In the training process, we adopt the Adam optimizer on a single NVIDIA Tesla P40 GPU. The training step is 80000 for crypto-currency data and 20000 for stock data, where the batch size is 128. We set the learning rate to 10^-4 and the weight decay of the l2 regularizer to 10^-7. The transaction cost rate is 0.25%. The temporal length of the local context is set to l=5, while the length of the price series is k=30. For all RL-based methods, results are averaged over 5 runs with random initialization seeds. |
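For anyone attempting reproduction, the reported hyperparameters can be collected into a single configuration sketch. The key names below are illustrative choices of mine, not identifiers from the authors' repository; the values are taken directly from the quoted setup.

```python
# Hyperparameters reported for RAT (IJCAI 2020), gathered for reproduction.
# Key names are hypothetical; values come from the paper's experiment setup.
RAT_CONFIG = {
    "attention_heads": 2,             # H = 2
    "feature_dim": 12,                # d_f = 12
    "train_steps_crypto": 80_000,     # training steps, crypto-currency data
    "train_steps_stock": 20_000,      # training steps, stock data
    "batch_size": 128,
    "optimizer": "Adam",
    "learning_rate": 1e-4,            # 10^-4
    "weight_decay": 1e-7,             # l2 regularizer, 10^-7
    "transaction_cost_rate": 0.0025,  # 0.25%
    "local_context_len": 5,           # l = 5
    "price_series_len": 30,           # k = 30
    "num_seeds": 5,                   # RL results averaged over 5 random seeds
}
```

A reproduction script could pass `learning_rate` and `weight_decay` straight to PyTorch's `torch.optim.Adam(params, lr=..., weight_decay=...)`, which matches the optimizer named in the paper.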