Learning Co-Substructures by Kernel Dependence Maximization

Authors: Sho Yokoi, Daichi Mochihashi, Ryo Takahashi, Naoaki Okazaki, Kentaro Inui

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We report the results of empirical evaluations, in which the proposed method is applied for acquiring and predicting narrative event pairs, an active task in the field of natural language processing."
Researcher Affiliation | Academia | "Tohoku University, Sendai, Japan; The Institute of Statistical Mathematics, Tokyo, Japan; {yokoi, ryo.t, okazaki, inui}@ecei.tohoku.ac.jp, daichi@ism.ac.jp"
Pseudocode | No | The paper describes its algorithms in prose but contains no formal pseudocode or algorithm blocks.
Open Source Code | No | The paper neither links to source code nor states that the code for its methodology is released.
Open Datasets | Yes | "We used the following two corpora: The Gigaword Corpus [Graff and Cieri, 2003]: a large collection of English newswire text data... Andrew Lang Fairy Tale Corpus: a small collection of children’s stories..." (https://catalog.ldc.upenn.edu/ldc2003t05/, http://www.mythfolklore.net/andrewlang/)
Dataset Splits | No | The paper specifies training and test sets but does not mention a separate validation split.
Hardware Specification | No | The paper gives no details about the hardware used to run the experiments, such as CPU or GPU models.
Software Dependencies | Yes | "Applying Stanford Core NLP Version 3.7.0 [Manning et al., 2014] to raw text from the corpora, we extracted sentence pairs sharing co-referring arguments." (See the extraction sketch after this table.)
Experiment Setup | Yes | "We ran the MH sampler with β = 10^8 to draw 7 × 10^5 and 2 × 10^5 samples, respectively, for the Gigaword corpus and the Fairy Tale corpus." (See the sampler sketch after this table.)
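
On the Software Dependencies row: a minimal sketch of how sentence pairs sharing co-referring arguments could be extracted with Stanford CoreNLP's coreference annotator. The authors presumably drove CoreNLP from Java; here it is called from Python via stanza's CoreNLP client, and both the annotator list and the pairing heuristic are assumptions, not the paper's code.

```python
# Sketch: find sentence pairs whose arguments co-refer, using Stanford
# CoreNLP's coreference annotator. Requires a local CoreNLP distribution;
# the annotator list and pairing heuristic are assumptions, not the
# authors' pipeline.
from itertools import combinations
from stanza.server import CoreNLPClient

text = "John bought a car. He drove it to work."

with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "lemma",
                               "ner", "parse", "coref"],
                   timeout=60000, memory="4G") as client:
    ann = client.annotate(text)  # returns a protobuf Document

    pairs = set()
    # Each coreference chain lists its mentions; mentions that fall in
    # different sentences yield a sentence pair sharing a co-referring
    # argument.
    for chain in ann.corefChain:
        sent_ids = {m.sentenceIndex for m in chain.mention}
        for i, j in combinations(sorted(sent_ids), 2):
            pairs.add((i, j))

print(pairs)  # e.g. {(0, 1)} for the example text above
```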
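
On the Experiment Setup row: β = 10^8 is an inverse temperature large enough to make a Metropolis-Hastings (MH) sampler behave almost greedily, accepting essentially only moves that improve the objective. Below is a generic sketch of such a loop; `score` (standing in for the paper's kernel-dependence objective) and `propose` (a local move over candidate substructures) are hypothetical placeholders, not the authors' implementation.

```python
# Sketch: Metropolis-Hastings sampling at inverse temperature beta.
# With beta as large as 1e8 the acceptance rule is effectively greedy:
# moves that lower the objective are almost never accepted.
import math
import random

def metropolis_hastings(init_state, score, propose, beta=1e8,
                        n_samples=700_000):
    state, s = init_state, score(init_state)
    best_state, best_s = state, s
    for _ in range(n_samples):
        cand = propose(state)  # a symmetric proposal is assumed
        s_cand = score(cand)
        # Accept with probability min(1, exp(beta * (s_cand - s))).
        if s_cand >= s or random.random() < math.exp(beta * (s_cand - s)):
            state, s = cand, s_cand
            if s > best_s:
                best_state, best_s = state, s
    return best_state, best_s
```

With the paper's settings, `n_samples` would be 7 × 10^5 for the Gigaword corpus and 2 × 10^5 for the Fairy Tale corpus.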