Timeline Summarization from Social Media with Life Cycle Models
Authors: Yi Chang, Jiliang Tang, Dawei Yin, Makoto Yamada, Yan Liu
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on social media datasets demonstrate the effectiveness of the proposed framework. |
| Researcher Affiliation | Collaboration | Yi Chang, Jiliang Tang, Dawei Yin (Yahoo Inc., {yichang,jlt,daweiy}@yahoo-inc.com); Makoto Yamada (Institute for Chemical Research, Kyoto University, myamada@kuicr.kyoto-u.ac.jp); Yan Liu (Department of Computer Science, University of Southern California, yanliu.cs@usc.edu) |
| Pseudocode | Yes | Algorithm 1 Gibbs Sampler of the Nonparametric Model |
| Open Source Code | No | The paper does not provide an explicit statement or link to the open-source code for the methodology described. |
| Open Datasets | No | "Since there are no benchmark datasets for the studied task, we manually label 4 social media datasets for evaluation. We collect 684.9k social media posts about Andy Murray, 20.8k posts about David Ferrer, 72.9k posts about Maria Sharapova, and 336.9k posts about Roger Federer from June 22 to August 7, 2012..." These are custom datasets, and the paper does not state that they are publicly available. |
| Dataset Splits | Yes | "For these supervised learning approaches, we train the learning-to-rank model via cross-validation, i.e., training on 3 labeled datasets and testing on the remaining dataset." |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the 'Gradient Boosted Decision Tree (GBDT) algorithm [Friedman, 2001]' and an 'existing API' for language detection (https://code.google.com/p/language-detection/), but does not specify version numbers for these software components or any other libraries. |
| Experiment Setup | No | The paper mentions 'standard preprocessing steps' and the use of the 'Gradient Boosted Decision Tree (GBDT) algorithm', but does not specify concrete hyperparameter values, training configurations, or other system-level settings for the experiments. |