Text Assisted Insight Ranking Using Context-Aware Memory Network
Authors: Qi Zeng, Liangchen Luo, Wenhao Huang, Yang Tang
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our approach largely improves the ranking precision as reported in multiple evaluation metrics. |
| Researcher Affiliation | Collaboration | ¹Stony Brook University, ²Peking University, ³Shanghai Discovering Investment; ¹qi.zeng@stonybrook.edu, ²{luolc,tangyang_ty}@pku.edu.cn, ³huangwh@discoveringgroup.com |
| Pseudocode | No | The paper describes the model components and their mathematical formulations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a link to its own open-source code or explicitly state that the code will be released. |
| Open Datasets | Yes | The financial report dataset is built upon the public annual and quarterly reports from the United States Securities and Exchange Commission (https://www.sec.gov/edgar/searchedgar/companysearch.html). We also evaluate the effectiveness of our model on the SBNation dataset from (Wiseman, Shieber, and Rush 2017). |
| Dataset Splits | Yes | In the experiment, we randomly split the dataset into training, validation, and test sets consisting of 60%, 20%, and 20% summaries, respectively. |
| Hardware Specification | Yes | The parameters are updated by the Adam algorithm (Kingma and Ba 2014) on a single 1080 Ti GPU and initialized by sampling from the uniform distribution [-0.1, 0.1]. |
| Software Dependencies | No | The paper mentions 'Adam algorithm' but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | Based on the performance on the validation set, we set the embedding size to 64 for the baseline methods and the proposed model. The initial learning rate is 0.0003. The model is trained in minibatches with a batch size of 1. |
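
Since the paper releases no code, the reproducibility details above (60/20/20 split, embedding size 64, learning rate 0.0003, batch size 1, Adam, uniform initialization in [-0.1, 0.1]) can be collected into a short configuration sketch. The following is a minimal PyTorch-style sketch, not the authors' implementation; `ContextAwareMemoryNetwork`, `uniform_init`, and `split_dataset` are hypothetical names assumed for illustration.

```python
# Hypothetical reconstruction of the reported training setup.
# Every identifier here is an assumption: the paper does not release code.
import torch
import torch.nn as nn
from torch.utils.data import random_split

EMBED_SIZE = 64        # embedding size used for baselines and the proposed model
LEARNING_RATE = 3e-4   # initial learning rate 0.0003
BATCH_SIZE = 1         # minibatch size reported in the paper


def uniform_init(model, low=-0.1, high=0.1):
    """Sample all parameters from U(-0.1, 0.1), as the paper describes."""
    for p in model.parameters():
        nn.init.uniform_(p, low, high)


def split_dataset(dataset):
    """Random 60% / 20% / 20% train/validation/test split over summaries."""
    n = len(dataset)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    n_test = n - n_train - n_val  # remainder, so the lengths sum to n
    return random_split(dataset, [n_train, n_val, n_test])


# Usage sketch (the model class is hypothetical):
# model = ContextAwareMemoryNetwork(embed_size=EMBED_SIZE)
# uniform_init(model)
# optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```

Note that hyperparameter choices not quoted above (e.g., number of epochs, gradient clipping) are unreported, so this sketch leaves them out rather than guessing.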