Minimally-Supervised Joint Learning of Event Volitionality and Subject Animacy Classification

Authors: Hirokazu Kiyomaru, Sadao Kurohashi (pp. 10921–10929)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments with crowdsourced gold data in Japanese and English and show that our method effectively learns volitionality and subject animacy without manually labeled data.
Researcher Affiliation | Academia | Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501, Japan; {kiyomaru, kuro}@nlp.ist.i.kyoto-u.ac.jp
Pseudocode | No | The paper describes its methods using prose and mathematical equations, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code and crowdsourced gold data are available at https://github.com/hkiyomaru/volcls.
Open Datasets | Yes | We used 30M documents in CC-100 as a raw corpus (Conneau et al. 2020; Wenzek et al. 2020).
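As a rough illustration (a minimal sketch, not taken from the released code), a raw corpus like this could be drawn from a local CC-100 dump, assuming the distributed format in which documents are separated by blank lines within *.txt.xz files; the file name and function names below are hypothetical.

    import itertools
    import lzma

    def iter_cc100_documents(path):
        """Yield documents from a CC-100 *.txt.xz file (blank-line separated)."""
        buffer = []
        with lzma.open(path, "rt", encoding="utf-8") as f:
            for line in f:
                line = line.rstrip("\n")
                if line:
                    buffer.append(line)
                elif buffer:
                    yield "\n".join(buffer)
                    buffer = []
            if buffer:
                yield "\n".join(buffer)

    def take_documents(path, n_docs=30_000_000):
        """Lazily take the first n_docs documents as the raw corpus."""
        return itertools.islice(iter_cc100_documents(path), n_docs)

    # Example (hypothetical path): for doc in take_documents("ja.txt.xz"): ...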
Dataset Splits | Yes | Table 3: Statistics of our dataset. A number marked with + means that the events were randomly sampled from a larger set according to the size of the smallest dataset, D^l_vol.
  Split | Label | Japanese | English
  Train | Volitional | 31,812 | 47,926
  Train | Non-volitional | 81,002 | 40,564
  Dev | Volitional | 149 | 67
  Dev | Non-volitional | 233 | 92
  Test | Volitional | 149 | 68
  Test | Non-volitional | 233 | 93
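For reference, a small sketch that re-encodes these counts to inspect the label balance per split; the numbers are copied from the table above, and the dictionary layout is purely illustrative.

    from collections import defaultdict

    # Counts from Table 3 of the paper (Japanese / English).
    counts = {
        ("train", "volitional"): {"ja": 31_812, "en": 47_926},
        ("train", "non-volitional"): {"ja": 81_002, "en": 40_564},
        ("dev", "volitional"): {"ja": 149, "en": 67},
        ("dev", "non-volitional"): {"ja": 233, "en": 92},
        ("test", "volitional"): {"ja": 149, "en": 68},
        ("test", "non-volitional"): {"ja": 233, "en": 93},
    }

    for lang in ("ja", "en"):
        totals = defaultdict(int)
        for (split, _label), by_lang in counts.items():
            totals[split] += by_lang[lang]
        for split in ("train", "dev", "test"):
            vol = counts[(split, "volitional")][lang]
            print(f"{lang} {split}: {totals[split]:>7} events, "
                  f"{vol / totals[split]:.1%} volitional")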
Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models, processor types, or memory amounts) used for running its experiments; it only mentions that the implementation uses PyTorch.
Software Dependencies | No | The paper mentions software such as KNP, spaCy, BERT-base, and PyTorch, but does not provide version numbers for these components, which reproducibility requires.
Experiment Setup | Yes | α was selected from {0.0, 0.01, 0.1, 1.0} for each of WR, SOC, and ADA. β was selected from {0.0, 0.1, 1.0}. We trained the model for three epochs with a batch size of 256. We used the Adam optimizer (Kingma and Ba 2015) with a learning rate of 3e-5, linear warmup of the learning rate over the first 10% steps, and linear decay of the learning rate.
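As a hedged illustration (a minimal sketch, not the authors' released training script), the reported optimization settings could be wired up in PyTorch as follows; `model`, `num_train_examples`, and the helper name `build_optimizer_and_scheduler` are placeholders, and the linear warmup/decay schedule uses the Hugging Face transformers utility rather than anything specified in the paper.

    import math
    import torch
    from transformers import get_linear_schedule_with_warmup

    # Search spaces quoted above (the selected values are not reproduced here).
    ALPHA_GRID = [0.0, 0.01, 0.1, 1.0]  # searched for each of WR, SOC, and ADA
    BETA_GRID = [0.0, 0.1, 1.0]

    def build_optimizer_and_scheduler(model, num_train_examples,
                                      epochs=3, batch_size=256, lr=3e-5):
        """Adam with linear warmup over the first 10% of steps, then linear decay."""
        steps_per_epoch = math.ceil(num_train_examples / batch_size)
        total_steps = steps_per_epoch * epochs
        warmup_steps = int(0.1 * total_steps)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        scheduler = get_linear_schedule_with_warmup(
            optimizer,
            num_warmup_steps=warmup_steps,
            num_training_steps=total_steps,
        )
        return optimizer, scheduler

    # A grid search over the loss weights would enumerate ALPHA_GRID x BETA_GRID
    # and keep the combination that performs best on the dev split.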