Financial Risk Prediction with Multi-Round Q&A Attention Network
Authors: Zhen Ye, Yu Qin, Wei Xu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The results prove that our model significantly outperforms the previous state-of-the-art methods and other baselines in three different periods. |
| Researcher Affiliation | Academia | School of Information, Renmin University of China yezhen1997@ruc.edu.cn, qinyu.gemini@gmail.com, weixu@ruc.edu.cn |
| Pseudocode | No | The paper describes algorithms and architectures using descriptive text, mathematical formulas, and diagrams (Figures 1, 2, 3), but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about making its source code available or include any links to a code repository. |
| Open Datasets | No | The data set of earnings call transcripts we used is scraped from Seeking Alpha (footnote 1). Footnote 1 refers to 'https://seekingalpha.com/'. While the source is a public website, no concrete access information (e.g., a specific link to the scraped dataset, a DOI, or a formal citation with authors/year for the dataset itself) is provided for the dataset used in the experiments. |
| Dataset Splits | Yes | We select data of the last year to test the effectiveness of the model, and we choose data in 2015 and 2016 to train the model and data in 2017 as the validation set. (See the split sketch after the table.) |
| Hardware Specification | Yes | We train our model on two GPUs separately, Titan V with 12G memory and Tesla V100 with 32G memory. |
| Software Dependencies | No | Our neural network is constructed with the Pytorch (footnote 3) architecture. ... we employ it with the torchtext module (footnote 4). The paper mentions software names like Pytorch and torchtext but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | The learning rate is in the set of {10^-3, 10^-4, 10^-5} and the batch size is set to 16 or 32. ... We choose the Adam optimizer to optimize our model step by step. (See the configuration sketch after the table.) |
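
The reported split maps directly onto a year-based filter. The sketch below is a minimal illustration, not the authors' code: the record structure is hypothetical, and the test year of 2018 is an assumption (the paper says only "data of last year", and the scraped Seeking Alpha dataset is not public).

```python
# Year-based split stated in the paper: train on 2015-2016,
# validate on 2017, test on the last year of the collected data.
def split_by_year(records, test_year=2018):  # exact test year not stated; 2018 assumed
    train = [r for r in records if r["year"] in (2015, 2016)]
    val = [r for r in records if r["year"] == 2017]
    test = [r for r in records if r["year"] == test_year]
    return train, val, test

# Usage with toy stand-in records.
records = [{"year": y, "transcript": "..."} for y in (2015, 2016, 2017, 2018)]
train, val, test = split_by_year(records)
```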
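
The reported setup likewise translates into a small hyperparameter grid: learning rate in {10^-3, 10^-4, 10^-5}, batch size 16 or 32, optimized with Adam. The sketch below assumes PyTorch (the paper's stated framework); the linear model, random tensors, and validation score are placeholders for the paper's unreleased multi-round Q&A attention network and data.

```python
import itertools
import torch
import torch.nn as nn

# Reported search space: lr in {1e-3, 1e-4, 1e-5}, batch size in {16, 32}.
LEARNING_RATES = [1e-3, 1e-4, 1e-5]
BATCH_SIZES = [16, 32]

def validate(model, x, y):
    # Negative MSE as a stand-in validation score (higher is better).
    with torch.no_grad():
        return -nn.functional.mse_loss(model(x), y).item()

x, y = torch.randn(256, 10), torch.randn(256, 1)  # stand-in data
best = None
for lr, bs in itertools.product(LEARNING_RATES, BATCH_SIZES):
    model = nn.Linear(10, 1)  # placeholder for the paper's network
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for i in range(0, len(x), bs):  # one pass over the stand-in data
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x[i:i + bs]), y[i:i + bs])
        loss.backward()
        optimizer.step()
    score = validate(model, x, y)
    if best is None or score > best[0]:
        best = (score, lr, bs)
print("best (score, lr, batch_size):", best)
```

Selecting the configuration by validation score mirrors the paper's use of the 2017 data as a validation set, though the paper does not describe its selection procedure in detail.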