ECHO-GL: Earnings Calls-Driven Heterogeneous Graph Learning for Stock Movement Prediction

Authors: Mengpu Liu, Mengying Zhu, Xiuyuan Wang, Guofang Ma, Jianwei Yin, Xiaolin Zheng

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on two financial datasets demonstrate the effectiveness of ECHO-GL on stock price movement prediction tasks together with high prediction accuracy and trading profitability.
Researcher Affiliation | Academia | 1 School of Software Technology, Zhejiang University, China; 2 College of Computer Science, Zhejiang University, China; 3 School of Computer Science and Technology, Zhejiang Gongshang University, China
Pseudocode | Yes | Algorithm 1: E-Graph construction algorithm
Open Source Code | Yes | We provide the original dataset with constructed E-Graphs, code of ECHO-GL, and implementation details in our GitHub repository: https://github.com/pupu0302/ECHOGL
Open Datasets | Yes | We conduct extensive experiments on two real-world datasets, i.e., Qin's (Qin and Yang 2019) and MAEC (Li et al. 2020a) datasets, which contain both the text transcripts and audio records of earnings calls from S&P 500 and S&P 1500 companies in U.S. stock exchanges, respectively. We collect dividend-adjusted closing prices from Yahoo Finance. Following previous studies (Qin and Yang 2019; Yang et al. 2020), we split the datasets into mutually exclusive training/validation/testing sets in the ratio of 7:1:2 in chronological order.
Dataset Splits | Yes | Following previous studies (Qin and Yang 2019; Yang et al. 2020), we split the datasets into mutually exclusive training/validation/testing sets in the ratio of 7:1:2 in chronological order.
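The 7:1:2 chronological split quoted above can be sketched as follows. This is a minimal illustration of the splitting scheme described in the paper, not the authors' code; the function name and variables are hypothetical:

```python
def chronological_split(samples, ratios=(0.7, 0.1, 0.2)):
    """Split time-ordered samples into train/val/test without shuffling.

    `samples` must already be sorted chronologically, so the validation
    and test periods always lie strictly after the training period
    (no look-ahead leakage).
    """
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must sum to 1"
    n = len(samples)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test

# Example: 10 trading days -> 7 train, 1 validation, 2 test
days = list(range(10))
train, val, test = chronological_split(days)
```

Keeping the split chronological (rather than random) matters for stock movement prediction, since evaluating on data that precedes the training period would leak future information.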
Hardware Specification | No | The paper mentions running experiments but does not provide specific details on the hardware used, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions various models and techniques like FinBERT, LSTM, and LDA but does not specify software dependencies with version numbers (e.g., Python, PyTorch versions).
Experiment Setup | No | The paper describes the model and evaluation but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) in the main text, instead referring to implementation details in their GitHub repository.