Corpus Wide Argument Mining—A Working Solution

Authors: Liat Ein-Dor, Eyal Shnarch, Lena Dankin, Alon Halfon, Benjamin Sznajder, Ariel Gera, Carlos Alzate, Martin Gleize, Leshem Choshen, Yufang Hou, Yonatan Bilu, Ranit Aharonov, Noam Slonim

AAAI 2020, pp. 7683-7691

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To evaluate the suggested end-to-end framework we trained several neural net variants on the VLD, and assessed their performance on four benchmarks."
Researcher Affiliation | Industry | IBM Research AI; {liate, eyals, lenad, alonhal, benjams, arielge, leshem.choshen, yonatanb, ranita, noams}@il.ibm.com; {carlos.alzate, martin.gleize, yhou}@ie.ibm.com
Pseudocode | No | No pseudocode or explicit algorithm blocks are present; the paper describes its processes and architectures in narrative text and a system diagram.
Open Source Code | No | The paper gives no statement or link providing access to the authors' code for the described methodology; the only link provided points to a dataset.
Open Datasets | Yes | "In addition to the dataset above, a matching dataset over Wikipedia was constructed. ... which we denote as the Wikipedia Dataset (Wiki) and is available at http://ibm.biz/debater-datasets." (A hedged loading sketch follows the table.)
Dataset Splits | Yes | "We selected 192 and 47 motions for our train and development sets respectively, and for each motion retrieved sentences in the corpus using queries similar to those in Levy et al. (2017) (see Section 5)." (A motion-level split sketch follows the table.)
Hardware Specification | No | No hardware details (e.g., GPU/CPU models, memory) are given for running the experiments.
Software Dependencies | No | No software dependencies with version numbers are listed (e.g., Python or library versions). The paper names models such as logistic regression and a neural net, but not the implementations or versions used.
Experiment Setup | No | The paper describes general aspects of the experimental setup (e.g., the loss function, training BERT) but does not report specific hyperparameters such as learning rates, batch sizes, or number of epochs. (A hedged fine-tuning sketch follows the table.)
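
Since the Open Datasets row points only to the landing page at http://ibm.biz/debater-datasets, a minimal sketch of loading the Wikipedia Dataset once downloaded might look like the following; the file name and the column names are hypothetical placeholders, not the published schema.

```python
# Minimal sketch: loading the Wikipedia Dataset (Wiki) after downloading it
# from http://ibm.biz/debater-datasets. The file name "wiki_dataset.csv" and
# the columns "motion" and "sentence" are hypothetical; consult the archive's
# README for the actual schema.
import pandas as pd

wiki = pd.read_csv("wiki_dataset.csv")  # hypothetical file name
print(f"{len(wiki)} sentences across {wiki['motion'].nunique()} motions")
```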
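
The Dataset Splits row quotes a motion-level partition: 192 training and 47 development motions, 239 in total, with every retrieved sentence inheriting the split of its motion. A minimal sketch of that kind of split, assuming placeholder motion identifiers and a random seed the paper does not report:

```python
# Sketch of a motion-level train/dev split: motions (debate topics) are
# partitioned first, and each sentence inherits the split of its motion,
# so no topic leaks across splits. The IDs below are stand-ins for the
# authors' actual 239 motions.
import random

motions = [f"motion_{i:03d}" for i in range(239)]  # 192 + 47 placeholder IDs

rng = random.Random(42)  # fixed seed for reproducibility; not from the paper
rng.shuffle(motions)

train_motions = set(motions[:192])
dev_motions = set(motions[192:])

def split_of(sentence_motion: str) -> str:
    """Assign a sentence to a split based on its source motion."""
    return "train" if sentence_motion in train_motions else "dev"

assert len(train_motions) == 192 and len(dev_motions) == 47
```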
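
Because the Experiment Setup row flags missing hyperparameters, the sketch below shows what a BERT fine-tuning run for sentence-level argument detection could look like with HuggingFace Transformers. The model name, the hyperparameter values (common fine-tuning defaults), and the two-example dataset are all assumptions, not values taken from the paper.

```python
# Sketch of fine-tuning BERT for sentence-level argument detection.
# The learning rate, batch size, and epoch count are typical BERT
# fine-tuning defaults, NOT values reported in the paper; the tiny
# dataset is a placeholder for the VLD-derived training split.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # argumentative vs. not

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

# Placeholder data; the real dataset holds sentences retrieved per motion.
train_data = Dataset.from_dict({
    "sentence": ["We should ban X because it causes measurable harm.",
                 "The city council met on Tuesday."],
    "label": [1, 0],
}).map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-arg-mining",
    learning_rate=2e-5,              # assumed default
    per_device_train_batch_size=16,  # assumed default
    num_train_epochs=3,              # assumed default
)
Trainer(model=model, args=args, train_dataset=train_data,
        tokenizer=tokenizer).train()
```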