Learning Latent Forests for Medical Relation Extraction
Authors: Zhijiang Guo, Guoshun Nan, Wei Lu, Shay B. Cohen
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive results on four datasets show that our model is able to significantly outperform state-of-the-art systems without relying on any direct tree supervision or pre-training. |
| Researcher Affiliation | Academia | 1Stat NLP Research Group, Singapore University of Technology and Design 2ILCC, School of Informatics, University of Edinburgh |
| Pseudocode | No | The paper describes the model components and their mathematical formulations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | We release our code at https://github.com/Cartus/Latent-Forests. |
| Open Datasets | Yes | We evaluate our LF-GCN model with four datasets on two tasks... For sentence-level relation extraction, we follow the experimental settings by Lifeng et al. [2020] on BioCreative VI CPR (CPR) [Krallinger et al., 2017] and Phenotype-Gene relation (PGR) [Sousa et al., 2019]... For cross-sentence n-ary relation extraction, we use two datasets generated by Peng et al. [2017]... We also use the SemEval-2010 Task 8 [Hendrickx et al., 2009] dataset... |
| Dataset Splits | Yes | The CPR dataset contains the relations between chemical components and human proteins. It has 16,107 training, 10,030 development and 14,269 testing instances... PGR introduces the relations between human phenotypes with human genes, and it contains 11,780 training instances and 219 test instances... |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions several tools and models (e.g., Stanford CoreNLP, GloVe embeddings, BiLSTM, GCNs) but does not provide specific version numbers for the software dependencies used in their implementation or experiments. |
| Experiment Setup | Yes | For the cross-sentence n-ary relation extraction task, we use the same data splits as Song et al. [2018], a stochastic gradient descent optimizer with a 0.9 decay rate, and 300-dimensional GloVe embeddings. The hidden size of both the BiLSTM and the GCNs is set to 300. |
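The reported setup can be collected into a small configuration sketch. This is a minimal illustration, not the authors' code: the paper states only "SGD with a 0.9 decay rate", so reading this as a per-epoch multiplicative learning-rate decay is an assumption, and all names below are illustrative.

```python
# Hedged sketch of the hyperparameters reported in the paper.
# The schedule interpretation (per-epoch multiplicative decay) is an assumption;
# the paper only says "stochastic gradient descent optimizer with a 0.9 decay rate".
HYPERPARAMS = {
    "optimizer": "SGD",
    "lr_decay_rate": 0.9,   # "0.9 decay rate" as quoted
    "embedding_dim": 300,   # 300-dimensional GloVe embeddings
    "hidden_size": 300,     # hidden size of both BiLSTM and GCN layers
}

def decayed_lr(base_lr: float, epoch: int, decay: float = 0.9) -> float:
    """One common reading of a multiplicative decay: lr * decay**epoch."""
    return base_lr * decay ** epoch
```

Under this reading, a base learning rate would shrink by a factor of 0.9 after every epoch.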