Learning Semantic Script Knowledge with Event Embeddings
Authors: Ashutosh Modi; Ivan Titov
ICLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on crowdsourced data collected for script induction by Regneri et al. (2010)... In our experiments, we compared our event embedding model (EE) against three baseline systems (BL, MSA, and BS). MSA is the system of Regneri et al. (2010). BS is a hierarchical Bayesian system of Frermann et al. (2014). The results are presented in Table 1. |
| Researcher Affiliation | Academia | Ashutosh Modi (ASHUTOSH@COLI.UNI-SB.DE), Saarland University, Saarbrücken, Germany; Ivan Titov (TITOV@UVA.NL), University of Amsterdam, Amsterdam, the Netherlands |
| Pseudocode | Yes | Algorithm 1 Learning Algorithm |
| Open Source Code | No | No explicit statement or link to the open-source code for the described methodology is provided. |
| Open Datasets | Yes | We evaluate our approach on crowdsourced data collected for script induction by Regneri et al. (2010), though, in principle, the method is applicable in the arguably more general setting of Chambers & Jurafsky (2008). |
| Dataset Splits | Yes | We used 4 held-out scenarios to choose model parameters; no scenario-specific tuning was performed, and the 10 test scripts were not used to perform model selection. (A scenario-level split sketch appears after the table.) |
| Hardware Specification | No | No specific hardware specifications are mentioned in the paper. |
| Software Dependencies | No | No explicit software dependencies or versions are listed; the paper's only concrete resource mention is: "We initialize word representations using the SENNA embeddings (Collobert et al., 2011)." (See the initialization sketch after the table.) |
| Experiment Setup | Yes | We use an online ranking algorithm based on the Perceptron Rank (PRank; Crammer & Singer, 2001), or, more accurately, its large-margin extension. One crucial difference, though, is that the error is computed not only with respect to w but also propagated back through the structure of the neural network. Additionally, we use a Gaussian prior on weights, regularizing both the embedding parameters and the vector w. We initialize word representations using the SENNA embeddings (Collobert et al., 2011). We used 4 held-out scenarios to choose model parameters; no scenario-specific tuning was performed, and the 10 test scripts were not used to perform model selection. (A hedged sketch of this training step follows the table.) |
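
The Experiment Setup row describes a PRank-style large-margin ranking update in which the error is propagated back through the embedding network, with a Gaussian prior on the weights. Since the paper releases no code, the sketch below is only an illustration of that training step under assumptions: the dimensions, learning rate, margin, L2 strength, and the single tanh composition layer are placeholders, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; the paper does not fix these in the quoted text.
VOCAB, WORD_DIM, EVENT_DIM = 1000, 50, 50

C = rng.normal(0.0, 0.1, (VOCAB, WORD_DIM))          # word embeddings
A = rng.normal(0.0, 0.1, (EVENT_DIM, 2 * WORD_DIM))  # event composition layer
w = rng.normal(0.0, 0.1, EVENT_DIM)                  # ranking vector

def embed_event(pred_id, arg_id):
    """Compose an event embedding from predicate and argument word vectors."""
    x = np.concatenate([C[pred_id], C[arg_id]])
    h = np.tanh(A @ x)
    return x, h

def margin_update(before, after, lr=0.01, margin=1.0, l2=1e-4):
    """One large-margin ranking step: the earlier event must score lower.

    If score(before) + margin > score(after), take a hinge-loss gradient
    step, propagating the error through the tanh layer into A and the
    word embeddings; the L2 terms play the role of the Gaussian prior.
    """
    global w
    xb, hb = embed_event(*before)
    xa, ha = embed_event(*after)
    if w @ hb + margin <= w @ ha:
        return                                 # constraint satisfied, no update
    grad_A, grad_C = np.zeros_like(A), {}
    for x, h, sign, (p, a) in [(xb, hb, 1.0, before), (xa, ha, -1.0, after)]:
        dh = sign * w * (1.0 - h ** 2)         # backprop through tanh
        grad_A += np.outer(dh, x)
        dx = A.T @ dh
        grad_C[p] = grad_C.get(p, 0.0) + dx[:WORD_DIM]
        grad_C[a] = grad_C.get(a, 0.0) + dx[WORD_DIM:]
    w -= lr * (hb - ha + l2 * w)
    A -= lr * (grad_A + l2 * A)
    for i, g in grad_C.items():
        C[i] -= lr * (g + l2 * C[i])
```

Training would iterate such updates over ordered event pairs drawn from the training scripts; the paper's Algorithm 1 is the authoritative statement of that loop.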
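The Software Dependencies row quotes the one concrete resource, the SENNA embeddings, used for initialization. Continuing the sketch above, the snippet below shows one way such an initialization could look; the file path, the one-entry-per-line `word v1 ... vN` format, and the `vocab` word-to-row mapping are assumptions (the actual SENNA release stores words and vectors in separate files).

```python
def init_from_pretrained(C, vocab, path="senna_embeddings.txt"):
    """Overwrite rows of C with pretrained vectors where available.

    Hypothetical format: one "word v1 ... vN" entry per line; words
    absent from the file keep their random initialization.
    """
    with open(path) as f:
        for line in f:
            word, *vals = line.split()
            if word in vocab and len(vals) == C.shape[1]:
                C[vocab[word]] = np.asarray(vals, dtype=float)
```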
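The Dataset Splits row stresses that model selection and testing never share a scenario. A minimal sketch of such a scenario-level split, assuming a hypothetical list of scenario names; the random shuffle here stands in for the paper's fixed choice of 4 development and 10 test scenarios from the Regneri et al. (2010) data.

```python
def split_scenarios(scenarios, n_dev=4, n_test=10, seed=0):
    """Scenario-level split: whole scenarios, not individual scripts,
    go to development or test, so no tuning leaks into evaluation."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(scenarios))
    dev = [scenarios[i] for i in order[:n_dev]]
    test = [scenarios[i] for i in order[n_dev:n_dev + n_test]]
    return dev, test
```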