Effective Deep Memory Networks for Distant Supervised Relation Extraction
Authors: Xiaocheng Feng, Jiang Guo, Bing Qin, Ting Liu, Yongjie Liu
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment on real-world datasets shows that our approach performs significantly and consistently better than various baselines. |
| Researcher Affiliation | Academia | Xiaocheng Feng, Jiang Guo, Bing Qin, Ting Liu, Yongjie Liu SCIR, Harbin Institute of Technology, Harbin, China {xcfeng, jguo, qinb, tliu, yongjieliu}@ir.hit.edu.cn |
| Pseudocode | No | The paper describes the model architecture and computational steps using mathematical formulations and textual descriptions, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | We conduct experiments on the basis of NYT10, a dataset developed by [Riedel et al., 2010] and then widely used in distant supervised relation extraction [Hoffmann et al., 2011; Surdeanu et al., 2012]. |
| Dataset Splits | Yes | This dataset was generated by aligning Freebase relations with the New York Times (NYT) corpus, with sentences from the years of 2005 and 2006 used for training and sentences from 2007 used for testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions techniques like Adadelta and Dropout but does not specify any software libraries, frameworks, or their version numbers used for implementation or experimentation. |
| Experiment Setup | Yes | We randomize other parameters with uniform distribution U(-0.01, 0.01). We clamp the word embeddings with 50-dimensional vectors, which is the same as [Jiang et al., 2016]. ... We further investigate the effect of the number of hops in word-level memory network. (A minimal sketch of this setup follows the table.) |
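To make the quoted setup concrete, below is a minimal sketch of a multi-hop word-level memory network using the reported initialization: 50-dimensional word embeddings and all other parameters drawn from U(-0.01, 0.01). It is written in PyTorch, which the paper does not name as its framework; the layer shapes, default hop count, and module names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class WordLevelMemoryNetwork(nn.Module):
    """Illustrative multi-hop word-level memory network.

    Dimensions and structure are assumptions based on the paper's
    textual description (50-d word embeddings, parameters drawn from
    U(-0.01, 0.01)); this is not the authors' released code.
    """

    def __init__(self, vocab_size, embed_dim=50, num_hops=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Scores the concatenation [memory slot; query] for attention.
        self.attention = nn.Linear(2 * embed_dim, 1)
        self.num_hops = num_hops
        # Draw every parameter from U(-0.01, 0.01), as quoted above.
        for p in self.parameters():
            nn.init.uniform_(p, -0.01, 0.01)

    def forward(self, token_ids, query):
        # Memory: one 50-d slot per word in the sentence.
        memory = self.embedding(token_ids)            # (batch, seq_len, 50)
        for _ in range(self.num_hops):
            expanded = query.unsqueeze(1).expand_as(memory)
            scores = self.attention(torch.cat([memory, expanded], dim=-1))
            weights = torch.softmax(scores, dim=1)    # attention over words
            # The attended summary becomes the query for the next hop.
            query = (weights * memory).sum(dim=1)
        return query


# Tiny smoke test with random token ids and a zero entity-pair query.
model = WordLevelMemoryNetwork(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 12))              # batch of 2 sentences
query = torch.zeros(2, 50)
print(model(tokens, query).shape)                     # torch.Size([2, 50])
```

Varying the hypothetical `num_hops` argument here corresponds to the paper's reported investigation of the number of hops in the word-level memory network.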