Quality Matters: Assessing cQA Pair Quality via Transductive Multi-View Learning

Authors: Xiaochi Wei, Heyan Huang, Liqiang Nie, Fuli Feng, Richang Hong, Tat-Seng Chua

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world datasets have well-validated the proposed model.
Researcher Affiliation | Academia | 1. Beijing Engineering Research Center of HLIPCA, School of Computer, Beijing Institute of Technology; 2. School of Computer Science, Shandong University; 3. School of Computing, National University of Singapore; 4. School of Computer and Information, Hefei University of Technology
Pseudocode | No | The paper describes the model mathematically but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Additionally, we have released our codes and data to facilitate follow-on researchers (footnote 1: http://datapublication.wixsite.com/tmvl).
Open Datasets | No | We crawled data from two compact subsites of Stack Exchange, i.e., English and Game. [...] In total, we obtained 4,704 and 3,043 labeled cQA pairs from these two datasets, respectively. The paper describes the creation of its own dataset but provides no direct link, DOI, or formal citation establishing its public availability; the general statement about releasing 'codes and data' on a Wix site is not specific enough to satisfy the dataset-reproducibility criterion.
Dataset Splits | Yes | In auto-evaluation, we utilized the automatically generated labels to evaluate the performance. We randomly selected 20% cQA pairs as unlabeled samples as well as testing samples. [...] In manual evaluation, the aforementioned 4,704 and 3,043 automatically labeled cQA pairs were all treated as labeled data, and we randomly selected 1,000 cQA pairs from each subsite as unlabeled ones. [...] From these unlabeled cQA pairs, we further randomly selected 100 cQA pairs and invited three volunteers to annotate their quality scores from 1 (poor) to 5 (excellent). (A hedged sketch of this split procedure is given after the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper does not list any specific software libraries or their version numbers used in the experiments (e.g., Python, PyTorch, TensorFlow, scikit-learn versions).
Experiment Setup | No | The paper describes feature extraction and evaluation metrics, but it does not specify hyperparameters (e.g., learning rate, batch size, number of epochs) or other system-level training settings required to reproduce the experiments.
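
For readers who want to approximate the evaluation protocol despite the missing setup details, the split procedure quoted in the Dataset Splits row can be expressed as a short sketch. This is not the authors' released code: the random seed, the function names, and the representation of cQA pairs are all assumptions made purely for illustration.

```python
# Minimal sketch of the split protocol quoted in the "Dataset Splits" row.
# Assumptions: the seed, function names, and list-of-IDs data representation
# are hypothetical; the paper does not specify any of them.
import random

rng = random.Random(0)  # hypothetical seed; the paper reports none

def auto_eval_split(labeled_pairs, test_ratio=0.2):
    """Auto-evaluation: randomly hold out 20% of the auto-labeled cQA pairs
    to serve both as unlabeled samples and as testing samples."""
    shuffled = rng.sample(labeled_pairs, len(labeled_pairs))
    n_test = int(test_ratio * len(shuffled))
    return shuffled[n_test:], shuffled[:n_test]  # (labeled, unlabeled/test)

def manual_eval_split(labeled_pairs, unlabeled_pool, n_unlabeled=1000, n_annotated=100):
    """Manual evaluation: keep all auto-labeled pairs as labeled data, draw 1,000
    extra pairs per subsite as unlabeled samples, and pick 100 of those for
    1 (poor) to 5 (excellent) annotation by three volunteers."""
    unlabeled = rng.sample(unlabeled_pool, n_unlabeled)
    to_annotate = rng.sample(unlabeled, n_annotated)
    return labeled_pairs, unlabeled, to_annotate

# Example with the reported English subsite size (4,704 labeled cQA pairs).
english_pairs = list(range(4704))      # placeholder IDs standing in for cQA pairs
labeled, unlabeled_test = auto_eval_split(english_pairs)
print(len(labeled), len(unlabeled_test))   # -> 3764 940
```

Even with such a sketch, exact reproduction would still depend on the unreported seed and hyperparameters noted in the table above.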