The Value of Paraphrase for Knowledge Base Predicates
Authors: Bingcong Xue, Sen Hu, Lei Zou, Jiashu Cheng (pp. 9346–9353)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted and the results prove the value of such good paraphrase dictionaries for natural language processing tasks. |
| Researcher Affiliation | Academia | Bingcong Xue,1 Sen Hu,1 Lei Zou,1,2 Jiashu Cheng3 1Peking University, China; 2Beijing Institute of Big Data Research, China; 3Culver Academies, USA {xuebingcong, husen, zoulei}@pku.edu.cn, jiashu.cheng@culver.org |
| Pseudocode | No | The paper describes its methods in prose and through diagrams but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper releases its paraphrase dataset rather than source code: "We release our dataset on github for further research." https://github.com/pkumod/Paraphrase |
| Open Datasets | Yes | We evaluate our dictionary on QALD (Question Answering over Linked Data), a series of open-domain question answering campaigns mainly based on DBpedia. ... We choose QALD6-QALD8 (Unger, Ngomo, and Cabrio 2016; Usbeck et al. 2017; 2018) to conduct experiments. |
| Dataset Splits | No | We choose QALD6-QALD8 ... The question numbers of these datasets can be found in Table 4. ... We merge all the QALD datasets to form into a large one, composed of 737 distinct questions, from which we randomly choose 67 tuples for testing and others for training. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. It only mentions 'After training for a whole day' without further specification. |
| Software Dependencies | No | The paper mentions using the 'Stanford Parser', 'gAnswer', and a 'Pointer-generator model' but does not provide version numbers for these or any other software dependencies needed to replicate the experiment. |
| Experiment Setup | No | The paper describes the overall process and model components but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or other system-level training settings. |
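The merged-QALD split quoted in the Dataset Splits row (737 distinct questions, 67 randomly held out for testing, the rest for training) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the seed, and the placeholder question IDs are assumptions.

```python
import random

def split_qald(questions, test_size=67, seed=0):
    """Randomly hold out `test_size` items for testing; the rest train.

    Mirrors the split described in the paper (737 merged QALD questions,
    67 for testing). Seed and function name are illustrative, not from
    the paper, which does not report how the random choice was made.
    """
    rng = random.Random(seed)
    shuffled = list(questions)
    rng.shuffle(shuffled)
    return shuffled[test_size:], shuffled[:test_size]

# Placeholder IDs stand in for the 737 distinct questions.
train, test = split_qald([f"q{i}" for i in range(737)])
print(len(train), len(test))  # 670 67
```

Because no seed or split files are published, this sketch can reproduce the split sizes but not the exact partition used in the paper, which is why the row above is marked "No".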