FEEL: Featured Event Embedding Learning

Authors: I-Ta Lee, Dan Goldwasser

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluated our model over three narrative cloze tasks, and showed that our model is competitive with the most recent state-of-the-art. We also show that our resulting embedding can be used as a strong representation for advanced semantic tasks such as discourse parsing and sentence semantic relatedness. We train the event embedding model over the New York Times (NYT) section of the English Gigaword (Parker et al. 2011). Our full model (which includes the event token, subject, object, prepositional object, sentiment, and animacy) represents each event with the concatenation of all its property embeddings, which is 1800-dimensional. The FEEL embeddings are evaluated over three intrinsic tasks: (1) Multiple-Choice Narrative Cloze (MCNC), (2) Multiple-Choice Narrative Sequences (MCNS), and (3) Multiple-Choice Narrative Explanation (MCNE); and two extrinsic tasks: (1) Semantic Relatedness on Sentences Involving Compositional Knowledge (SICK), and (2) Implicit Discourse Sense Classification (IDSC).
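The 1800-dimensional event representation described above can be illustrated with a minimal numpy sketch. The property names and random vectors below are placeholders, not the paper's learned embeddings; the point is only the concatenation of six 300-dimensional property embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 300  # per-property embedding size reported in the paper

# Hypothetical property embeddings for one event; in FEEL these are
# learned lookup vectors, here they are random placeholders.
properties = ["event_token", "subject", "object",
              "prep_object", "sentiment", "animacy"]
embeddings = {p: rng.standard_normal(DIM) for p in properties}

# The full event representation is the concatenation of all
# six property embeddings: 6 x 300 = 1800 dimensions.
event_vec = np.concatenate([embeddings[p] for p in properties])
print(event_vec.shape)  # (1800,)
```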
Researcher Affiliation | Academia | I-Ta Lee, Dan Goldwasser, Purdue University, {lee2226, dgoldwas}@purdue.edu
Pseudocode | No | The paper describes the model architecture and training process in textual descriptions and a block diagram (Figure 1), but it does not include any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about making the source code for its methodology available, nor does it provide a link to a code repository.
Open Datasets | Yes | We train the event embedding model over the New York Times (NYT) section of the English Gigaword (Parker et al. 2011).
Dataset Splits | Yes | We replicate the experimental set up described in the previous work (Granroth-Wilding and Clark 2016), splitting the data into training/dev/testing sets accordingly.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory, or cloud instances) used for running the experiments.
Software Dependencies | No | The paper mentions using the Vader sentiment analyzer from NLTK and Stanford CoreNLP for preprocessing, and Adam for optimization. However, it does not specify version numbers for any of these software components, which is required for reproducibility.
Experiment Setup | Yes | For FEEL, we use a 300-dimensional space to embed each property. In our experiment, we use the uniform noise distribution over the event vocabulary, and set the window size k = 5 and the negative ratio r = 10. For simplicity, λi and λr are fixed to 1 in this paper. The cross-entropy loss function and Adam (Kingma and Ba 2014) with minibatches are used to optimize the model. The network architecture for the SICK task computes h× = vs1 ⊙ vs2 and h+ = |vs1 − vs2|, concatenates them into h = [h×; h+], and predicts p = softmax(W h); for IDSC, it is 'a two-hidden-layer neural network, where the activation functions are Rectified Linear Units (ReLU) and the objective function is the cross-entropy loss'.
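The SICK-task head described in the setup (elementwise product plus absolute difference of two sentence vectors, followed by a softmax layer) can be sketched in a few lines of numpy. The sentence vectors and the class count below are placeholder assumptions, not values from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d, n_classes = 300, 5  # illustrative sizes; n_classes is a placeholder

# Hypothetical sentence vectors; in the paper these would be built
# from the learned FEEL event embeddings.
v_s1 = rng.standard_normal(d)
v_s2 = rng.standard_normal(d)

# Elementwise product and absolute difference, concatenated into one
# feature vector, then a single softmax layer over the classes.
h = np.concatenate([v_s1 * v_s2, np.abs(v_s1 - v_s2)])
W = rng.standard_normal((n_classes, 2 * d))
p = softmax(W @ h)
print(p.shape)  # (5,)
```

This product/difference featurization is a common choice for sentence-pair tasks; the sketch treats it as the intended reading of the architecture quoted above.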