Hierarchical Event Grounding
Authors: Jiefu Ou, Adithya Pratapa, Rishubh Gupta, Teruko Mitamura
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On an automatically created multilingual dataset from Wikipedia and Wikidata, our experiments demonstrate the effectiveness of the hierarchical loss against retrieve and re-rank baselines. |
| Researcher Affiliation | Academia | Jiefu Ou, Adithya Pratapa, Rishubh Gupta, Teruko Mitamura Language Technologies Institute, Carnegie Mellon University {jiefuo, vpratapa, rishubhg, teruko}@andrew.cmu.edu |
| Pseudocode | No | The paper describes its methodology and algorithms in prose but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/JefferyO/Hierarchical-Event-Grounding |
| Open Datasets | No | To this end, we automatically compile a dataset with event hierarchies of maximum height=3 that consists of 2K events and 937K mentions across 42 languages. The detailed statistics of the train-dev-test split as well as the Wikinews evaluation set (WN) are presented in Table 1. The paper describes creating its own dataset but does not provide a public link, DOI, or specific citation for its compiled version. |
| Dataset Splits | Yes | The detailed statistics of the train-dev-test split as well as the Wikinews evaluation set (WN) are presented in Table 1: Train 751,550 mentions / 2,288 events; Dev 93,047 mentions / 216 events; Test 91,928 mentions / 273 events; Wikinews 258 mentions / 64 events. |
| Hardware Specification | No | The paper does not mention any specific hardware components (e.g., GPU models, CPU types, memory, cloud instances) used for running the experiments. |
| Software Dependencies | No | For both the bi-encoder and cross-encoder, we use XLM-RoBERTa (Conneau et al. 2020) as the multilingual transformer encoder. While it mentions the transformer model used, it does not specify software dependencies like programming languages or libraries with version numbers (e.g., Python, PyTorch/TensorFlow versions). |
| Experiment Setup | Yes | At inference time, we apply a threshold τc to the reranked event candidates and emit those with score ≥ τc as final predictions. ... Cross-encoder is also optimized using a BCE loss that maximizes the score of gold events against other retrieved negatives for every mention. ... During training, a batch of Nh parent-child event pairs is independently sampled and the bi-encoder is trained to minimize the in-batch BCE loss. (A minimal sketch of this setup follows the table.) |
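
The bi-encoder training described in the last row can be illustrated with a short PyTorch sketch. This is a hypothetical reconstruction, not the authors' released code: the `xlm-roberta-base` checkpoint, mean pooling, dot-product scoring, and the exact positive/negative pairing inside the batch are all assumptions made for illustration; only the use of XLM-RoBERTa and an in-batch BCE objective comes from the paper.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Hypothetical bi-encoder sketch: XLM-RoBERTa encodes mentions and events
# into a shared space; scores are dot products, trained with an in-batch
# BCE loss (pooling and pairing scheme are assumptions, not the paper's).
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")

def encode(texts):
    """Mean-pool the last hidden states into one vector per input text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state          # (N, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)         # (N, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)          # (N, H)

def in_batch_bce_loss(mention_texts, event_texts):
    """In-batch BCE: the i-th mention is treated as positive for the i-th
    event and negative for every other event in the batch."""
    scores = encode(mention_texts) @ encode(event_texts).T   # (N, N) logits
    labels = torch.eye(scores.size(0))                       # diagonal = gold
    return F.binary_cross_entropy_with_logits(scores, labels)
```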
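
The inference step (re-rank retrieved candidates with a cross-encoder, then keep those clearing the threshold τc) can likewise be sketched as below. The scorer `cross_encoder_score` and the default `tau_c` value are placeholders; the paper only specifies that candidates with score at or above τc are emitted as final predictions.

```python
def ground_mention(mention, candidates, cross_encoder_score, tau_c=0.5):
    """Re-rank retrieved candidate events and keep those whose cross-encoder
    score is at least tau_c (the value 0.5 here is illustrative only).

    `cross_encoder_score(mention, event) -> float` stands in for a scorer
    such as an XLM-RoBERTa cross-encoder head.
    """
    scored = [(event, cross_encoder_score(mention, event)) for event in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [event for event, score in scored if score >= tau_c]
```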