End-to-End Quantum-like Language Models with Application to Question Answering

Authors: Peng Zhang, Jiabin Niu, Zhan Su, Benyou Wang, Liqun Ma, Dawei Song

AAAI 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on the TREC-QA and WIKIQA datasets have verified the effectiveness of our proposed models." |
| Researcher Affiliation | Collaboration | 1. School of Computer Science and Technology, Tianjin University, Tianjin, China; 2. Department of Social Network Operation, Social Network Group, Tencent, Shenzhen, China; 3. School of Electrical and Information Engineering, Tianjin University, Tianjin, China; 4. Computing and Communications Department, The Open University, United Kingdom |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to its source code. It links to "English Wikimedia" as the corpus for word embeddings, which is a third-party dataset, not the authors' own implementation. |
| Open Datasets | Yes | Extensive experiments are conducted on the TREC-QA and WikiQA datasets; Table 1 reports their statistics over the train/dev/test splits. |
| Dataset Splits | Yes | Table 1 gives train/dev/test splits. TREC-QA: 1229/82/100 questions, 53417/1148/1517 pairs, 12.0/19.3/18.7% correct. WikiQA: 873/126/243 questions, 8672/1130/2351 pairs, 12.0/12.4/12.5% correct. |
| Hardware Specification | No | The paper does not report the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions word2vec (Mikolov et al. 2013) for training word embeddings, but does not specify version numbers for word2vec or any other relevant software library or dependency. |
| Experiment Setup | Yes | Table 2 lists the NNQLM hyperparameters. TREC-QA: NNQLM-I learning rate 0.01, batch size 100; NNQLM-II learning rate 0.01, batch size 100, filter number 65, filter size 40. WikiQA: NNQLM-I learning rate 0.08, batch size 100; NNQLM-II learning rate 0.02, batch size 140, filter number 150, filter size 40. |
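The dataset statistics quoted from Table 1 can be captured in a small data structure, which also lets totals be sanity-checked. This is a sketch transcribed from the numbers above; the variable and key names (`SPLITS`, `pct_correct`, `total_pairs`) are illustrative, not from the paper.

```python
# Dataset split statistics transcribed from Table 1 of the paper.
# Structure: dataset -> split -> {questions, QA pairs, % correct answers}
SPLITS = {
    "TREC-QA": {
        "train": {"questions": 1229, "pairs": 53417, "pct_correct": 12.0},
        "dev":   {"questions": 82,   "pairs": 1148,  "pct_correct": 19.3},
        "test":  {"questions": 100,  "pairs": 1517,  "pct_correct": 18.7},
    },
    "WikiQA": {
        "train": {"questions": 873, "pairs": 8672, "pct_correct": 12.0},
        "dev":   {"questions": 126, "pairs": 1130, "pct_correct": 12.4},
        "test":  {"questions": 243, "pairs": 2351, "pct_correct": 12.5},
    },
}

def total_pairs(dataset: str) -> int:
    """Sum QA pairs across the train/dev/test splits of a dataset."""
    return sum(split["pairs"] for split in SPLITS[dataset].values())

print(total_pairs("TREC-QA"))  # 53417 + 1148 + 1517 = 56082
```

The low %Correct values (roughly one correct answer per eight candidates) show why both datasets are treated as answer-ranking rather than classification benchmarks.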
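Similarly, the hyperparameters reported in Table 2 can be written down as a configuration mapping. This is a sketch of the reported values only; the dictionary layout and key names (`HYPERPARAMS`, `learning_rate`, etc.) are assumptions for illustration, and `None` marks the convolution settings that do not apply to NNQLM-I, which has no convolutional layer in this table.

```python
# Hyperparameters transcribed from Table 2 of the paper.
# Keyed by (dataset, model); NNQLM-I rows leave filter settings blank
# in the table, recorded here as None.
HYPERPARAMS = {
    ("TREC-QA", "NNQLM-I"):  {"learning_rate": 0.01, "batch_size": 100,
                              "filter_number": None, "filter_size": None},
    ("TREC-QA", "NNQLM-II"): {"learning_rate": 0.01, "batch_size": 100,
                              "filter_number": 65,   "filter_size": 40},
    ("WikiQA",  "NNQLM-I"):  {"learning_rate": 0.08, "batch_size": 100,
                              "filter_number": None, "filter_size": None},
    ("WikiQA",  "NNQLM-II"): {"learning_rate": 0.02, "batch_size": 140,
                              "filter_number": 150,  "filter_size": 40},
}

# Example lookup: the NNQLM-II configuration used on WikiQA.
config = HYPERPARAMS[("WikiQA", "NNQLM-II")]
print(config["learning_rate"], config["batch_size"])  # 0.02 140
```

Recording the setup this way makes it easy to see that only the learning rate, batch size, and filter count are tuned per dataset, while the filter size (40) is shared by both NNQLM-II configurations.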