Distant Supervision for Relation Extraction with Sentence-Level Attention and Entity Descriptions
Authors: Guoliang Ji, Kang Liu, Shizhu He, Jun Zhao
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly. |
| Researcher Affiliation | Academia | Guoliang Ji, Kang Liu, Shizhu He, Jun Zhao; National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; {guoliang.ji, kliu, shizhu.he, jzhao}@nlpr.ia.ac.cn |
| Pseudocode | No | The paper describes its models and formulas but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not provide a specific link to the source code for its methodology or explicitly state that the code is being released. |
| Open Datasets | Yes | We evaluate our approach using the dataset developed by (Riedel, Yao, and McCallum 2010) by aligning Freebase relations with the New York Times (NYT) corpus. |
| Dataset Splits | Yes | In our experiments, we tune all of the models using three-fold validation on the training set. (A sketch of this tuning protocol appears below the table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions training word embeddings with 'word2vec' and provides a general link, but it does not specify version numbers for word2vec or any other software components. (An embedding-training sketch appears below the table.) |
| Experiment Setup | Yes | The best configurations are: k_w = 50, k_d = 5, w = 3, n = 200, λ = 0.01, and the batch size is 50. Following (Hinton et al. 2012), we set the dropout rate to 0.5. (These settings are collected in the configuration sketch below.) |
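The "Dataset Splits" row quotes the paper's three-fold validation on the training set for hyperparameter tuning. Below is a minimal sketch of that protocol using scikit-learn's `KFold`; `train_and_score`, `N_TRAIN_BAGS`, and the candidate λ grid are hypothetical stand-ins, since the authors did not release code.

```python
# A minimal sketch of the three-fold tuning protocol quoted above, using
# scikit-learn's KFold. `train_and_score` and `N_TRAIN_BAGS` are hypothetical
# stand-ins; this is illustration only, not the authors' implementation.
import numpy as np
from sklearn.model_selection import KFold

N_TRAIN_BAGS = 1000  # assumption: number of entity-pair bags in the training set

def train_and_score(train_idx, dev_idx, reg_weight):
    """Hypothetical stand-in: train on train_idx, return a score on dev_idx."""
    rng = np.random.default_rng(int(reg_weight * 1000) + len(dev_idx))
    return rng.random()

kf = KFold(n_splits=3, shuffle=True, random_state=0)
best_lambda, best_score = None, -np.inf
for lam in (0.1, 0.01, 0.001):  # assumed candidate grid for λ
    fold_scores = [train_and_score(tr, dev, lam)
                   for tr, dev in kf.split(np.arange(N_TRAIN_BAGS))]
    mean_score = float(np.mean(fold_scores))
    if mean_score > best_score:
        best_lambda, best_score = lam, mean_score
print(f"selected λ = {best_lambda} (mean fold score {best_score:.3f})")
```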
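The "Software Dependencies" row notes that word embeddings were trained with word2vec but gives no versions. One way to reproduce that step today is gensim's word2vec implementation; the sketch below matches only the reported dimension k_w = 50, while the toy corpus, window, and output path are assumptions (the paper trained on the NYT corpus).

```python
# A hedged sketch of pre-training 50-dimensional word embeddings (k_w = 50)
# with gensim's word2vec implementation. The toy corpus and all parameters
# except the vector size are assumptions.
from gensim.models import Word2Vec

sentences = [
    ["barack", "obama", "was", "born", "in", "honolulu"],
    ["the", "new", "york", "times", "corpus", "provides", "training", "text"],
]  # stand-in for the tokenized NYT corpus

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, workers=4)
model.wv.save_word2vec_format("nyt_word_embeddings_50d.txt")  # hypothetical path
```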
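For reference, the "Experiment Setup" row's reported best configuration can be collected in one place. The descriptive key names below are our own reading of the paper's symbols (k_w word embedding size, k_d position embedding size, w convolution window, n feature maps), not identifiers from any released code.

```python
# The reported best configuration as a plain dict. Key names are our own
# interpretation of the paper's notation, labeled as an assumption.
BEST_CONFIG = {
    "word_embedding_dim": 50,     # k_w
    "position_embedding_dim": 5,  # k_d
    "conv_window_size": 3,        # w
    "num_feature_maps": 200,      # n
    "lambda_": 0.01,              # λ as reported; its exact role follows the paper's objective
    "batch_size": 50,
    "dropout_rate": 0.5,          # following Hinton et al. 2012
}
```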