Exploring Encoder-Decoder Model for Distant Supervised Relation Extraction
Authors: Sen Su, Ningning Jia, Xiang Cheng, Shuguang Zhu, Ruiping Li
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on a popular dataset show that our model achieves significant improvement over state-of-the-art methods. |
| Researcher Affiliation | Academia | Sen Su, Ningning Jia, Xiang Cheng, Shuguang Zhu, Ruiping Li. State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China. {susen, jianingning, chengxiang, zsg1990ok, liruiping}@bupt.edu.cn |
| Pseudocode | No | The paper describes its methods using mathematical equations and textual explanations, but it does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not provide any statement regarding the availability of open-source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We evaluate our model on a widely used dataset [1] released by (Riedel, Yao and McCallum 2010). This dataset was generated by aligning Freebase relations with the New York Times corpus (NYT)... [1] http://iesl.cs.umass.edu/riedel/ecml/ |
| Dataset Splits | Yes | Sentences from years 2005 and 2006 are used for training, and sentences from year 2007 are used for testing. (An illustrative split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or processing power used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency versions (e.g., Python 3.x, PyTorch 1.x) for its implementation. |
| Experiment Setup | Yes | Table 1 lists all parameter values used in the experiments: window size l = 3, word embedding dimension dw = 50, sentence embedding size ds = 230, batch size B = 100, learning rate λ = 0.01, dropout probability p = 0.5. We use three-fold validation to tune our model on the training data. (These values are collected in the config sketch below.) |
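
Since the paper releases no code, the following minimal Python sketch only illustrates the year-based train/test split quoted in the Dataset Splits row. The instance schema (a `year` field and a `sentence` field) is an assumption made for illustration; the released NYT archive does not necessarily follow it.

```python
# Minimal sketch of the year-based split described in the paper:
# sentences from 2005-2006 train, sentences from 2007 test.
# The dict schema below ("year", "sentence") is hypothetical.

def split_by_year(instances):
    """Partition distant-supervision instances by publication year."""
    train = [x for x in instances if x["year"] in (2005, 2006)]
    test = [x for x in instances if x["year"] == 2007]
    return train, test

# Toy usage with placeholder sentences.
corpus = [
    {"year": 2005, "sentence": "Barack Obama was born in Honolulu."},
    {"year": 2007, "sentence": "Steve Jobs founded Apple in Cupertino."},
]
train_set, test_set = split_by_year(corpus)
print(len(train_set), len(test_set))  # 1 1
```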
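
Similarly, the Table 1 values quoted in the Experiment Setup row can be gathered into a single configuration object for reference. The dataclass and its field names are a reading aid of our own, not the authors' code.

```python
# Hyperparameters transcribed from Table 1 of the paper. The class and
# field names are assumptions; the paper publishes no implementation.
from dataclasses import dataclass

@dataclass
class Hyperparameters:
    window_size: int = 3          # convolution window size l
    word_dim: int = 50            # word embedding dimension d_w
    sentence_dim: int = 230       # sentence embedding size d_s
    batch_size: int = 100         # batch size B
    learning_rate: float = 0.01   # learning rate λ
    dropout: float = 0.5          # dropout probability p

print(Hyperparameters())
```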