Segmentation of Tweets with URLs and its Applications to Sentiment Analysis
Authors: Abdullah Aljebreen, Weiyi Meng, Eduard Dragut (pp. 12480-12488)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present an extensive empirical evaluation of our approach in this section to show (i) the effectiveness of our segmentation algorithm and (ii) its benefit to sentiment analysis on tweets. |
| Researcher Affiliation | Academia | 1 Temple University; 2 Binghamton University |
| Pseudocode | Yes | Algorithm 1: The pseudo code of the main function of our algorithm. |
| Open Source Code | No | The paper mentions using the third-party library 'Twitter4J' and provides its URL, but it does not provide an explicit statement or link to open-source code for its own methodology. |
| Open Datasets | Yes | We run our SA experiments on two datasets: ... (2) SemEval is a collection of 60k labeled tweets for the SemEval tasks (2013-2017) (Rosenthal, Farra, and Nakov 2017). |
| Dataset Splits | No | The paper describes sampling methods (simple random and stratified random sampling) to approximate accuracy via manual inspection of 1,000 tweets (see the sketch after this table), but it does not specify traditional training/validation/test dataset splits used for model development or hyperparameter tuning. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU, CPU models, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the 'Twitter4J' and 'Stanford CoreNLP, BERT' tools, but it does not provide specific version numbers for these or other software dependencies used in its implementation. |
| Experiment Setup | No | The paper describes its greedy algorithm and some general steps (e.g., a 'similarity threshold'), but it does not provide specific numerical hyperparameters (like learning rates, batch sizes, epochs) or detailed system-level training configurations for its experiments. |
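
To make the sampling procedure noted under Dataset Splits concrete, below is a minimal sketch of proportional stratified random sampling of 1,000 tweets for manual inspection. It is not the authors' code: the `stratum_key` function, the URL-based stratum in the demo, and the random seed are illustrative assumptions.

```python
import random
from collections import defaultdict

def stratified_sample(tweets, stratum_key, total=1000, seed=42):
    """Draw `total` tweets, allocating to each stratum in proportion to its size."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for tweet in tweets:
        strata[stratum_key(tweet)].append(tweet)

    sample = []
    for group in strata.values():
        # Proportional allocation per stratum, capped at the stratum size.
        k = min(len(group), round(total * len(group) / len(tweets)))
        sample.extend(rng.sample(group, k))

    # Rounding can leave the sample slightly over or under `total`; adjust randomly.
    if len(sample) > total:
        sample = rng.sample(sample, total)
    elif len(sample) < total:
        remaining = [t for t in tweets if t not in sample]
        sample.extend(rng.sample(remaining, total - len(sample)))
    return sample

if __name__ == "__main__":
    # Tiny synthetic demo: 5,000 fake tweets, stratified by URL presence.
    tweets = [{"id": i, "text": "look http://t.co/x" if i % 3 == 0 else "plain text"}
              for i in range(5000)]
    picked = stratified_sample(tweets, lambda t: "http" in t["text"], total=1000)
    print(len(picked), "tweets sampled")
```

Simple random sampling, the other method the row mentions, would reduce to a single call such as `random.Random(seed).sample(tweets, 1000)`.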