Generating Event Causality Hypotheses through Semantic Relations

Authors: Chikara Hashimoto, Kentaro Torisawa, Julien Kloetzer, Jong-Hoon Oh

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper reports: 'Our experiments show that, from 2.4 million event causalities extracted from the web, our method generated more than 300,000 hypotheses, which were not in the input, with 70% precision.'
Researcher Affiliation | Academia | National Institute of Information and Communications Technology, Kyoto, 619-0289, Japan; {ch, torisawa, julien, rovellia}@nict.go.jp
Pseudocode | No | The paper describes its method in prose, but does not include any explicitly labeled pseudocode blocks or algorithms in a structured, code-like format.
Open Source Code | No | The paper states: 'We are planning to release generated event causality hypotheses to the public in the near future.' This indicates a planned future release of the generated hypotheses, not current availability of source code for the method.
Open Datasets | Yes | The paper states: 'First, we obtained 2,018,170,662 noun pairs with positive PMI values from the word co-occurrence frequency database (Section 2.1). Then we randomly sampled two million of them.' [...] Resource: https://alaginrc.nict.go.jp/resources/nict-resource/li-info/lilist.html, ID: A-5. (A note on the PMI computation follows this table.)
Dataset Splits | No | The paper mentions that the HYPOCLASSIFIER was trained on 'labeled data consist[ing] of 147,519 examples (15,195 are positive)', but it does not specify a distinct validation split or any other train/validation/test partition for the overall experimental setup or for the evaluation of generated hypotheses.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, number of machines) used to conduct the experiments.
Software Dependencies | No | The paper mentions 'SVM-Light with polynomial kernel d = 2 (svmlight.joachims.org)' and 'J.DepP (Yoshinaga and Kitsuregawa 2009)' but does not provide version numbers for these software components, which would be needed for exact reproducibility.
Experiment Setup | No | The paper states that the HYPOCLASSIFIER is 'trained by SVM-Light with polynomial kernel d = 2', but it does not report further training details such as the regularization constant C, the feature set and preprocessing, or other hyperparameter values for any of its models. (A hedged sketch of this classifier configuration follows the table.)
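
Note on PMI (referenced in the Open Datasets row): the paper filters noun pairs by positive pointwise mutual information. The sketch below is a minimal, generic PMI computation, not the authors' code; the count values, the pmi helper, and the normalization by total pair count are illustrative assumptions, since the paper's word co-occurrence frequency database and counting scheme are not detailed here.

    import math

    def pmi(pair_count, x_count, y_count, total_pairs):
        """Pointwise mutual information for a noun pair (x, y).

        pair_count  -- co-occurrence count of the pair (x, y)
        x_count     -- occurrence count of x
        y_count     -- occurrence count of y
        total_pairs -- total co-occurrence count in the database
        """
        p_xy = pair_count / total_pairs
        p_x = x_count / total_pairs
        p_y = y_count / total_pairs
        return math.log(p_xy / (p_x * p_y))

    # Illustrative numbers only: a pair is kept when PMI > 0, i.e. the nouns
    # co-occur more often than independence would predict.
    print(pmi(pair_count=120, x_count=5000, y_count=3000, total_pairs=10_000_000))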
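
Note on the classifier configuration (referenced in the Software Dependencies and Experiment Setup rows): the only reported setting is SVM-Light with a degree-2 polynomial kernel. The sketch below uses scikit-learn's SVC as a stand-in for SVM-Light; the random features and labels, C, and coef0 values are assumptions and are not taken from the paper.

    import numpy as np
    from sklearn.svm import SVC

    # Placeholder features and binary labels; the actual HYPOCLASSIFIER features
    # (semantic relation patterns, context features, etc.) are not specified here.
    rng = np.random.default_rng(0)
    X = rng.random((200, 50))
    y = rng.integers(0, 2, size=200)

    # Degree-2 polynomial kernel, roughly analogous to SVM-Light's "-t 1 -d 2".
    # C and coef0 are assumed defaults, not values reported in the paper.
    clf = SVC(kernel="poly", degree=2, coef0=1.0, C=1.0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))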