Constructing Narrative Event Evolutionary Graph for Script Event Prediction

Authors: Zhongyang Li, Xiao Ding, Ting Liu

Venue: IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the widely used New York Times corpus demonstrate that our model significantly outperforms state-of-the-art baseline methods, using the standard multiple-choice narrative cloze evaluation.
Researcher Affiliation | Academia | Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology ({zyli, xding, tliu}@ir.hit.edu.cn).
Pseudocode | No | The paper provides mathematical equations for the model but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | The data and code are released at https://github.com/eecrazy/ConstructingNEEG_IJCAI_2018.
Open Datasets | Yes | Following Granroth-Wilding and Clark [2016], we extract event chains from the New York Times portion of the Gigaword corpus.
Dataset Splits | Yes | Table 1 (dataset statistics) reports the chains used for SGNN: 140,331 training chains, 10,000 development chains, and 10,000 test chains.
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU or GPU models, memory) used for the experiments.
Software Dependencies | No | The C&C tools [Curran et al., 2007] are used for POS tagging and dependency parsing, and OpenNLP is used for phrase-structure parsing and coreference resolution; no version numbers are provided for these tools.
Experiment Setup | Yes | All hyperparameters are tuned on the development set, and margin loss is used as the objective function... The margin parameter of the loss is set to 0.015. Θ is the set of model parameters, and λ, the L2 regularization coefficient, is set to 0.00001. The learning rate is 0.0001, the batch size is 1,000, and the number of recurrent steps K is 2. The DeepWalk algorithm [Perozzi et al., 2014] is used to train embeddings for the predicate-GR... and the Skip-gram algorithm [Mikolov et al., 2013] is used to train embeddings for the arguments a0, a1, a2... The embedding dimension d is 128. The model parameters are optimized with RMSprop, and early stopping determines when to end the training loop.
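To make the quoted setup concrete, below is a minimal PyTorch sketch of one training step with a margin loss plus explicit L2 regularization, wired with the hyperparameters above (margin 0.015, λ = 0.00001, learning rate 0.0001, RMSprop). The linear scorer, the `train_step` helper, and the 128-dimensional event-embedding shapes are hypothetical stand-ins for illustration, not the authors' released SGNN code; see their repository for the actual implementation.

```python
import torch

# Hyperparameters quoted in the paper's experiment setup.
MARGIN = 0.015     # margin parameter of the loss
L2_LAMBDA = 1e-5   # L2 regularization coefficient (lambda)
LR = 1e-4          # learning rate for RMSprop

# Hypothetical stand-in scorer: any module mapping a 128-dim event
# embedding to a scalar relatedness score would fit this slot.
scorer = torch.nn.Linear(128, 1)
optimizer = torch.optim.RMSprop(scorer.parameters(), lr=LR)

def margin_loss(s_gold, s_distractors):
    """Hinge loss max(0, margin - s(gold) + s(distractor)), summed over distractors."""
    return torch.clamp(MARGIN - s_gold + s_distractors, min=0.0).sum()

def train_step(gold_emb, distractor_embs):
    # gold_emb: (128,) embedding of the correct subsequent event;
    # distractor_embs: (num_distractors, 128) embeddings of wrong candidates.
    optimizer.zero_grad()
    s_gold = scorer(gold_emb)                      # shape (1,)
    s_bad = scorer(distractor_embs).squeeze(-1)    # shape (num_distractors,)
    loss = margin_loss(s_gold, s_bad)
    # Explicit L2 penalty over the model parameters (lambda * ||theta||^2).
    loss = loss + L2_LAMBDA * sum(p.pow(2).sum() for p in scorer.parameters())
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-in embeddings for one multiple-choice instance.
gold = torch.randn(128)
distractors = torch.randn(4, 128)
print(train_step(gold, distractors))
```

In the paper's actual model, the scorer is a gated graph neural network run K = 2 recurrent steps over the narrative event evolutionary graph; the linear layer above merely stands in so the loss and optimizer wiring are runnable end to end.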