Relation Extraction Exploiting Full Dependency Forests
Authors: Lifeng Jin, Linfeng Song, Yue Zhang, Kun Xu, Wei-Yun Ma, Dong Yu (pp. 8034-8041)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three datasets show that full dependency forests and parser adjustment give significant improvements over carefully designed baselines, showing state-of-the-art or competitive performances on biomedical or newswire benchmarks. |
| Researcher Affiliation | Collaboration | 1The Ohio State University, Columbus, OH, USA 2Tencent AI Lab, Bellevue, WA, USA 3Institute of Advanced Technology, Westlake Institute for Advanced Study 4School of Engineering, Westlake University 5Academia Sinica |
| Pseudocode | No | The paper describes the model's operations using mathematical equations and textual explanations, but it does not provide any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is now available at https://github.com/lifengjin/xxx. |
| Open Datasets | Yes | BioCreative VI CPR (Krallinger et al., 2017)... Phenotype-Gene relation (PGR) (Sousa, Lamurias, and Couto, 2019)... SemEval-2010 task 8 (Hendrickx et al., 2009)... Penn Treebank (PTB) (Marcus and Marcinkiewicz, 1993) |
| Dataset Splits | Yes | BioCreative VI CPR (...) 1020, 612 and 800 extracted PubMed abstracts for training, development and testing, respectively. (...) Phenotype-Gene relation (PGR) (...) We separate the first 15% training instances as our development set. (...) SemEval-2010 task 8 (...) 10,717 instances (8000 for training) (...) Penn Treebank (PTB) (...) (sections 02-21 for training, 22 for development and 23 for testing) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of the Adam optimizer and specifies embedding dimensions and types (BioASQ, GloVe) but does not provide specific version numbers for software dependencies or libraries (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The dimension of hidden vectors in Bi-LSTM is set to 200, and the number of filters is also set to 200. We use Adam (Kingma and Ba, 2014), with a learning rate of 0.001, as the optimizer. For baseline, GRN step T is set to 2. |
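The reported setup values and the PGR split procedure can be collected into a minimal, framework-agnostic sketch. The dict keys and the `pgr_dev_split` helper name are our own; the paper does not specify its framework or exact split code.

```python
# Hyperparameters quoted in the Experiment Setup row (key names are ours).
CONFIG = {
    "bilstm_hidden_dim": 200,  # dimension of Bi-LSTM hidden vectors
    "num_filters": 200,        # number of convolution filters
    "optimizer": "Adam",       # Kingma and Ba (2014)
    "learning_rate": 0.001,
    "grn_steps": 2,            # GRN step T for the baseline
}

def pgr_dev_split(train_instances, dev_fraction=0.15):
    """Carve the first 15% of PGR training instances out as a development
    set, as described in the Dataset Splits row (helper name is ours)."""
    n_dev = int(len(train_instances) * dev_fraction)
    return train_instances[n_dev:], train_instances[:n_dev]

train, dev = pgr_dev_split(list(range(100)))
print(len(train), len(dev))  # prints "85 15"
```

Note that the split takes the *first* 15% of training instances as development data, matching the paper's wording, rather than sampling randomly.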