Multi-hop Reading Comprehension across Documents with Path-based Graph Convolutional Network
Authors: Zeyun Tang, Yongliang Shen, Xinyin Ma, Wei Xu, Jiale Yu, Weiming Lu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on WikiHop dataset, and our approach achieves state-of-the-art accuracy against previously published approaches. Especially, our ensemble model surpasses human performance by 4.2%. |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Zhejiang University, China |
| Pseudocode | No | The paper describes methods and formulas but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link to its open-source code. |
| Open Datasets | Yes | We use WIKIHOP [Welbl et al., 2018] to validate the effectiveness of our proposed approach, which is a multi-choice style reading comprehension data set. |
| Dataset Splits | Yes | The dataset contains about 43K/5K/2.5K samples in training, development, and test set respectively. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions software like 'NLTK', 'Stanford CoreNLP', 'GloVe', 'ELMo', 'Adam', and 'BERT' but does not specify their version numbers, which are crucial for reproducibility. |
| Experiment Setup | Yes | The dimensions of hidden states in BiLSTM and GCN are set as d = 256, and the number of nodes and the query length are truncated to 600 and 25, respectively. We stack L = 4 layers of the Gated-RGCN blocks. During training, we set the mini-batch size as 16, and use Adam [Kingma and Ba, 2015] with learning rate 0.0002 for learning the parameters. (A configuration sketch based on these values follows the table.) |
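
Since the paper releases no code, the following is a minimal sketch of how the reported hyperparameters could be collected into a training configuration. Only the numeric values (d = 256, L = 4, 600 nodes, query length 25, batch size 16, Adam with learning rate 0.0002) come from the paper; the class name `PathGCNConfig`, all field names, and the placeholder model are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical configuration sketch; hyperparameter values taken from the
# paper's experiment setup, everything else is an assumption.
from dataclasses import dataclass

import torch


@dataclass
class PathGCNConfig:
    hidden_dim: int = 256      # d = 256 for BiLSTM and GCN hidden states
    num_gcn_layers: int = 4    # L = 4 stacked Gated-RGCN blocks
    max_nodes: int = 600       # graph truncated to 600 nodes
    max_query_len: int = 25    # query truncated to 25 tokens
    batch_size: int = 16       # mini-batch size
    learning_rate: float = 2e-4  # Adam learning rate 0.0002


config = PathGCNConfig()

# Placeholder module standing in for the unreleased Gated-RGCN network;
# shown only to illustrate the reported optimizer setup.
model = torch.nn.Linear(config.hidden_dim, config.hidden_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
```

Because the paper omits hardware details and dependency versions, anything beyond these numbers (framework, seeds, schedulers) would have to be guessed by a re-implementer.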