Inferring Temporal Knowledge for Near-Periodic Recurrent Events
Authors: Dinesh Raghu, Surag Nair, Mausam
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform several experiments to assess TRINE's benefits. For predicting future events, we perform a human evaluation and find that TRINE beats humans convincingly. For KB completion, when instance extractions are available, TRINE fills in missing instances at a Mean Reciprocal Rank [Craswell, 2009] over 0.5 (i.e., one of the top two predictions is correct on average). Overall, our contributions are: ... Our experiments show that our system vastly outperforms several natural baselines and also crowd workers. |
| Researcher Affiliation | Collaboration | 1 IIT Delhi, New Delhi, India 2 IBM Research, New Delhi, India 3 Stanford University, Stanford, CA, USA |
| Pseudocode | No | The paper describes the model and its components in detail but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | We release our code and system output for further research. Available at https://github.com/dair-iitd/trine |
| Open Datasets | Yes | We choose Freebase as the knowledge base... We divide these 4,350 recurrent events into two equal sets: train and test. The instances and their occurrence dates in the train set are used as a source of distant supervision to train the instance extractor. A small fraction of the events from the train set are used as a dev set for feature selection and hyperparameter tuning. ... We use the New York Times Corpus [Sandhaus, 2008] and a part of ClueWeb12 as the input text corpora for the instance extractor. |
| Dataset Splits | No | The paper mentions that 'A small fraction of the events from the train set are used as dev set for feature selection and hyper parameters tuning', but does not provide specific percentages or absolute counts for this 'dev set' (validation split). |
| Hardware Specification | No | The paper mentions 'Microsoft Azure sponsorships, and IIT Delhi HPC facility for computational resources' but does not provide specific hardware details such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper mentions using 'SUTIME' and 'Stanford Core NLP pipeline' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We tune ρ using a dev set and set it to 0.1. |
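The KB-completion result above is reported in terms of Mean Reciprocal Rank. As a minimal illustrative sketch (not code from the paper; the helper name is hypothetical), MRR averages the reciprocal of the 1-based rank at which the correct answer first appears, so an MRR above 0.5 means the correct prediction is, on average, within the top two:

```python
def mean_reciprocal_rank(ranks):
    """ranks: 1-based rank of the first correct prediction for each query."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Two queries answered at rank 1 and two at rank 2:
# (1 + 0.5 + 0.5 + 1) / 4 = 0.75, i.e. comfortably above the 0.5 threshold.
print(mean_reciprocal_rank([1, 2, 2, 1]))  # 0.75
```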