Target-Dependent Churn Classification in Microblogs

Authors: Hadi Amiri, Hal Daumé III

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on a Twitter dataset created from a large number of tweets about three telecommunication brands. Experimental results show an average F1 performance of 75% for target-dependent churn classification in microblogs. We performed all the experiments through 10-fold cross validation and used the two-tailed paired t-test (p < 0.01) for significance testing. Table 6 shows the F1 performance of target-dependent churn classification based on different indicators evaluated over three brands.
Researcher Affiliation | Academia | Hadi Amiri and Hal Daumé III, Computational Linguistics and Information Processing (CLIP) Lab, Institute for Advanced Computer Studies, University of Maryland, {hadi,hal}@umiacs.umd.edu
Pseudocode | No | The paper describes the methods textually and with mathematical formulations, but it does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions the use of the "Vowpal Wabbit classification toolkit" and provides a link to its website, but this is a third-party tool. The paper states, "Our dataset is available at www.umiacs.umd.edu/~hadi/chData." This refers to the dataset, not the authors' source code for their methodology.
Open Datasets | Yes | Our dataset is available at www.umiacs.umd.edu/~hadi/chData.
Dataset Splits | Yes | We performed all the experiments through 10-fold cross validation and used the two-tailed paired t-test (p < 0.01) for significance testing. (A sketch of this evaluation protocol appears after the table.)
Hardware Specification | No | The paper does not specify any hardware details such as GPU/CPU models, processors, or memory used to run the experiments. It only mentions the software toolkit used.
Software Dependencies | No | The paper mentions employing the "Vowpal Wabbit classification toolkit" and states that it used "the Twitter POS tagger developed in (Owoputi et al. 2013) and Stanford parser," but it does not provide specific version numbers for any of these software components.
Experiment Setup | Yes | We employ the Vowpal Wabbit classification toolkit with all parameters set to their default values to perform the classification experiments. In the experiments, we set k = 3 as it leads to superior performance. We weight the positive examples by the ratio of negative to positive examples to deal with imbalanced classification input. (A sketch of this setup appears after the table.)
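
The following is a minimal, hedged sketch of how the experiment setup quoted in the last row could be reproduced: Vowpal Wabbit run with its default parameters and positive examples up-weighted by the negative-to-positive ratio, expressed through VW's per-example importance weights. The file names and the `train_examples`/`test_examples` lists are illustrative assumptions, not artifacts released with the paper.

```python
# Illustrative sketch (not the authors' code): Vowpal Wabbit with default
# parameters and positive examples up-weighted by the negative-to-positive
# class ratio, via VW's per-example importance weights.
import subprocess

def write_vw_file(examples, path, pos_weight=1.0):
    """examples: list of (label, feature_tokens) pairs with label in {+1, -1}."""
    with open(path, "w") as f:
        for label, feats in examples:
            weight = pos_weight if label == 1 else 1.0
            # VW line format: "<label> <importance> | <features>"
            f.write(f"{label} {weight} | {' '.join(feats)}\n")

# train_examples / test_examples are assumed inputs (e.g. tokenized tweets).
n_pos = sum(1 for label, _ in train_examples if label == 1)
n_neg = len(train_examples) - n_pos
write_vw_file(train_examples, "train.vw", pos_weight=n_neg / n_pos)
write_vw_file(test_examples, "test.vw")

# Train and predict with VW's default settings, as stated in the paper.
subprocess.run(["vw", "train.vw", "-f", "model.vw"], check=True)
subprocess.run(["vw", "-t", "-i", "model.vw", "test.vw", "-p", "preds.txt"], check=True)
```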
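
Likewise, the evaluation protocol reported above (10-fold cross-validation, average F1, and a two-tailed paired t-test at p < 0.01) could look roughly as follows. The scikit-learn classifier is only a stand-in for the paper's Vowpal Wabbit pipeline, and `X_base`, `X_full`, and `y` are assumed feature matrices and labels (e.g., with and without the churn indicators), not the paper's data.

```python
# Illustrative sketch (not the authors' code) of the evaluation protocol:
# 10-fold cross-validation, average F1, and a two-tailed paired t-test
# comparing two feature configurations over the same folds.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

def cv_f1_scores(X, y, n_splits=10, seed=0):
    """Per-fold F1 of a placeholder linear classifier (stand-in for VW)."""
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in folds.split(X, y):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))
    return np.array(scores)

# X_base / X_full / y are assumed inputs; the shared seed keeps folds paired.
f1_base = cv_f1_scores(X_base, y)
f1_full = cv_f1_scores(X_full, y)
t_stat, p_value = ttest_rel(f1_base, f1_full)  # two-tailed paired t-test
print(f"mean F1: {f1_base.mean():.3f} vs {f1_full.mean():.3f} (p = {p_value:.4f})")
```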