Short Text Representation for Detecting Churn in Microblogs

Authors: Hadi Amiri, Hal Daumé III

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on Twitter data about three telco brands show the utility of our approach for this task."
Researcher Affiliation | Academia | Hadi Amiri and Hal Daumé III, Computational Linguistics and Information Processing (CLIP) Lab, Institute for Advanced Computer Studies, University of Maryland ({hadi,hal}@umiacs.umd.edu)
Pseudocode | No | The paper provides mathematical equations for the RNN model but does not include pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper does not provide a statement or link releasing source code for the described methodology.
Open Datasets | Yes | "We utilize churn data provided by (Amiri and Daume III 2015). The data was collected from Twitter for three telecommunication brands: Verizon, T-Mobile, and AT&T." Footnote 3: www.umiacs.umd.edu/ hadi/ch Data/
Dataset Splits | Yes | "We performed all the experiments through 10-fold cross validation and used the two-tailed paired t-test (p < 0.01) for significance testing."
Hardware Specification | No | The paper does not provide hardware details (e.g., CPU/GPU models, memory) for running its experiments.
Software Dependencies | No | The paper mentions using the Vowpal Wabbit classification toolkit but does not give a version number for it or any other software dependency.
Experiment Setup | Yes | "We use our development dataset to learn our RNN model for tweet representation. For this, we set the size of hidden layer to m = 128 in the experiments. We employ Vowpal Wabbit classification toolkit with all parameters set to their default values to perform the classification experiments."
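The splitting protocol quoted under Dataset Splits (10-fold cross validation) can be sketched as below. This is a minimal illustration of the general technique, not the authors' code; the helper name, shuffling seed, and dataset size are assumptions.

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Partition indices 0..n-1 into k disjoint (train, test) splits,
    as in standard k-fold cross validation.
    The shuffle seed is an illustrative assumption, not from the paper."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k disjoint test folds
    return [([j for j in idx if j not in set(f)], list(f)) for f in folds]

# Example: 10 folds over a hypothetical 100-tweet dataset.
splits = kfold_indices(100, k=10)
train, test = splits[0]
print(len(splits), len(train), len(test))  # -> 10 90 10
```

Each of the k splits would train the classifier on 90% of the tweets and evaluate on the held-out 10%, with per-fold scores feeding the paired t-test.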
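Since the quoted setup runs classification through Vowpal Wabbit with all parameters at their defaults, a sketch of preparing one example in VW's plain-text input format may help; the whitespace tokenization and the +1/-1 churn labels are assumptions for illustration, not the paper's preprocessing.

```python
def to_vw_line(label, text):
    """Format one example as a Vowpal Wabbit input line:
    '<label> | <space-separated features>'.
    ':' and '|' are special characters in VW's input format, so they are
    replaced; whitespace tokenization is an illustrative assumption."""
    tokens = text.lower().replace(":", " ").replace("|", " ").split()
    return f"{label} | " + " ".join(tokens)

# Hypothetical churn-labeled (+1) and non-churn (-1) tweets.
print(to_vw_line(1, "Switching from Verizon soon"))
print(to_vw_line(-1, "Love my new phone"))
```

A file of such lines could then be passed to the `vw` command-line tool; leaving its flags unset matches the "default values" setting the quoted setup describes.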