Adversarial Training for Community Question Answer Selection Based on Multi-Scale Matching

Authors: Xiao Yang, Madian Khabsa, Miaosen Wang, Wei Wang, Ahmed Hassan Awadallah, Daniel Kifer, C. Lee Giles

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed method on SemEval 2016 and SemEval 2017 datasets and achieve state-of-the-art or similar performance. In this section, we evaluate the proposed method on two benchmark datasets: SemEval 2016 and SemEval 2017. Ablation experiments are conducted on both datasets to demonstrate the effectiveness of the proposed adversarial training strategy and Multi-scale Matching model.
Researcher Affiliation | Collaboration | Xiao Yang (1), Madian Khabsa (2), Miaosen Wang (3), Wei Wang (4), Ahmed Hassan Awadallah (4), Daniel Kifer (1), C. Lee Giles (1); 1 Pennsylvania State University, 2 Apple, 3 Google, 4 Microsoft
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for the described methodology, nor a link to a code repository.
Open Datasets | Yes | The SemEval 2016 (Nakov et al. 2016) and SemEval 2017 (Nakov et al. 2017) datasets are used, specifically Task 3, Subtask C (Question-External Comment Similarity) of the SemEval 2016 and SemEval 2017 challenges.
Dataset Splits | No | The paper explicitly mentions the test set for both datasets, but it neither gives specific details for the training and validation splits (e.g., percentages or example counts) nor cites predefined splits that would make all three divisions reproducible.
Hardware Specification | No | The paper acknowledges a hardware grant from NVIDIA but does not specify the GPU models, CPU types, or other hardware used to run the experiments.
Software Dependencies | No | The paper mentions using GloVe embeddings and Adam optimization, but it does not name any software with version numbers (e.g., Python, TensorFlow, PyTorch, or specific library versions) needed to reproduce the experiment. A hedged sketch of loading GloVe embeddings follows the table.
Experiment Setup | Yes | The model weights are optimized using the Adam (Kingma and Ba 2014) optimization method. The initial learning rate is 1e-4 and is decayed by 5 for every 10 epochs. L2 regularization is applied to the model weights with a coefficient of 1e-6, and the dropout rate is 0.2. A configuration sketch follows the GloVe example below.
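
Since the paper names GloVe embeddings but no software stack, the following is a minimal sketch of how pretrained vectors might be loaded, assuming PyTorch and a 300-dimensional GloVe text file. The `load_glove` helper, the file path, and the toy vocabulary are hypothetical, not from the paper.

```python
# Minimal sketch: loading pretrained GloVe vectors into an embedding layer.
# PyTorch and the glove.840B.300d.txt file are assumptions; the paper does
# not name a framework or a GloVe variant.
import numpy as np
import torch
import torch.nn as nn

def load_glove(path, vocab, dim=300):
    """Build an embedding matrix for `vocab` (token -> index) from a GloVe text file."""
    # Tokens absent from the GloVe file keep a small random initialization.
    matrix = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            token, values = parts[0], parts[1:]
            if token in vocab and len(values) == dim:
                matrix[vocab[token]] = np.asarray(values, dtype="float32")
    return nn.Embedding.from_pretrained(torch.from_numpy(matrix), freeze=False)

# Usage with a toy vocabulary (the path is hypothetical):
# vocab = {"<pad>": 0, "question": 1, "answer": 2}
# embedding = load_glove("glove.840B.300d.txt", vocab)
```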
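
The Experiment Setup row reports enough hyperparameters to sketch a training configuration. The code below again assumes PyTorch, reads "decayed by 5" as dividing the learning rate by 5 every 10 epochs, and uses a placeholder model, since the Multi-scale Matching architecture is not released.

```python
# Sketch of the reported training configuration, assuming PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(            # placeholder for the Multi-scale Matching model
    nn.Linear(300, 128),
    nn.ReLU(),
    nn.Dropout(p=0.2),            # dropout rate of 0.2, as reported
    nn.Linear(128, 2),
)

# Adam with the reported initial learning rate; weight_decay realizes the
# L2 regularization coefficient of 1e-6 on the model weights.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-6)

# Learning-rate decay: divide by 5 (gamma = 0.2) every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.2)

for epoch in range(30):
    # ... one pass over the training data (loss.backward(), optimizer.step())
    # would go here ...
    scheduler.step()
```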