Inferring Implicit Event Locations from Context with Distributional Similarities

Authors: Jin-Woo Chung, Wonsuk Yang, Jinseon You, Jong C. Park

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our system shows good performance of a 0.58 F1-score, where state-of-the-art classifiers for intra-sentential spatiotemporal relations achieve around 0.60 F1-scores. We also evaluate our methods on the annotated corpus, achieving an F1-score of 0.62 for all locations and 0.53 for implicit locations only.
Researcher Affiliation | Academia | School of Computing, KAIST, Republic of Korea {jwchung, derrick0511, jsyou, park}@nlp.kaist.ac.kr
Pseudocode | No | The paper describes the system and methods in narrative text but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions external libraries and models (gensim, word2vec) and provides their URLs, but there is no explicit statement about making the authors' own implementation code open-source or available.
Open Datasets | Yes | For experiments and evaluation, we use the corpus presented in Chung et al. (2015), which, to the best of our knowledge, is the only work that provides manual annotations of event-location relations on a document level.
Dataset Splits | No | The paper uses a corpus for experiments and evaluation, but it does not explicitly provide training/test/validation dataset splits (e.g., percentages, sample counts, or predefined split citations) needed for reproduction.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, or cloud instance types) used to run its experiments.
Software Dependencies | No | The paper mentions using the gensim library and word2vec model but does not specify their version numbers, which are needed for reproducible software dependencies.
Experiment Setup | No | The paper describes configurations for distributional similarities (e.g., event-event, event-location) and coarse-grained linking methods, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) for a model training setup.
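Because the paper reports using gensim and word2vec but does not pin down how the event-location distributional similarities were computed (see the Software Dependencies and Experiment Setup rows above), the following is a minimal sketch of that kind of similarity scoring, assuming a pre-trained word2vec model loaded through gensim's KeyedVectors. The model path, function name, and example words are illustrative placeholders, not details taken from the paper.

```python
from gensim.models import KeyedVectors

# Load pre-trained word2vec vectors. The file name is a placeholder; the
# paper does not state which pre-trained vectors or version were used.
vectors = KeyedVectors.load_word2vec_format("word2vec-vectors.bin", binary=True)

def rank_candidate_locations(event_word, candidate_locations):
    """Rank candidate location words by cosine similarity to an event word.

    Illustrative stand-in for the event-location distributional similarity
    the paper describes; this is not the authors' implementation.
    """
    scored = []
    for loc in candidate_locations:
        # Skip out-of-vocabulary words instead of guessing a score.
        if event_word in vectors and loc in vectors:
            scored.append((loc, float(vectors.similarity(event_word, loc))))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical usage: which candidate location is most similar to an event?
print(rank_candidate_locations("eat", ["restaurant", "kitchen", "airport"]))
```

A faithful reproduction would still need the other components the table notes as underspecified: the event-event similarities, the coarse-grained linking methods, and the document-level event-location annotations from Chung et al. (2015).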