Retaining Data from Streams of Social Platforms with Minimal Regret

Authors: Nguyen Thanh Tam, Matthias Weidlich, Duong Chi Thang, Hongzhi Yin, Nguyen Quoc Viet Hung

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on large-scale real-world datasets illustrate the feasibility of our approach in terms of both, runtime and information quality."
Researcher Affiliation | Academia | 1 École Polytechnique Fédérale de Lausanne, 2 Humboldt-Universität zu Berlin, 3 The University of Queensland, 4 Griffith University
Pseudocode | Yes | Algorithm 1: A Progressive Retaining Algorithm (an illustrative retention sketch follows the table).
Open Source Code | No | The paper does not provide any statement about releasing the source code for the methodology, nor a link to a code repository.
Open Datasets | No | The paper mentions extracting datasets using the Twitter Streaming API ("We extracted datasets using the Twitter Streaming API."), but it does not provide a direct link, DOI, specific repository name, or formal citation that would make these collected datasets publicly available.
Dataset Splits | No | The paper describes how portions of the data were used for evaluation (e.g., "select 100K items E from the original datasets and construct a set of retained items S as the k = 1% oldest items in E. We stream E and learn model parameters online"), but it does not specify explicit training, validation, and test splits with percentages or sample counts (see the evaluation-setup sketch below the table).
Hardware Specification | Yes | "All results have been obtained on an Intel i7 3.8GHz system (4 cores, 16GB RAM)."
Software Dependencies | No | The paper mentions using "existing frameworks [Zhuang et al., 2016]" for social features but does not name specific software with version numbers (e.g., Python, PyTorch, TensorFlow, or particular libraries and solvers with versions) used for its implementation.
Experiment Setup | Yes | "Following [Hoffman et al., 2013], we vary the forget rate in (0.5, 1], choose a stable window size = 10 and report average values." (See the forget-rate sweep sketch below the table.)
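
The Pseudocode row points to the paper's Algorithm 1, which is not reproduced here. The following Python sketch is only a generic, budgeted streaming-retention loop, not the paper's method: the `utility` callback, the min-heap eviction rule, and the toy usage at the bottom are illustrative assumptions standing in for the paper's regret-based retention criterion.

```python
import heapq
from typing import Any, Callable, Iterable, List, Tuple

def retain_stream(stream: Iterable[Any],
                  budget: int,
                  utility: Callable[[Any], float]) -> List[Any]:
    """Generic budgeted retention over a stream (not the paper's Algorithm 1):
    keep the `budget` highest-utility items seen so far, deciding once per
    arriving item whether it replaces the weakest retained item."""
    retained: List[Tuple[float, int, Any]] = []  # min-heap of (score, arrival index, item)
    for i, item in enumerate(stream):
        score = utility(item)
        if len(retained) < budget:
            heapq.heappush(retained, (score, i, item))
        elif score > retained[0][0]:
            # the new item beats the current weakest retained item: swap them
            heapq.heapreplace(retained, (score, i, item))
    return [item for _, _, item in retained]

# Toy usage: keep 1% of a 100K-item stream under a dummy utility.
kept = retain_stream(range(100_000), budget=1_000, utility=lambda x: x % 97)
assert len(kept) == 1_000
```

A min-heap makes each arrival an O(log |S|) decision, which is the kind of per-item cost a progressive (one-pass) retention policy needs on a stream.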
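
The Dataset Splits row quotes an evaluation protocol: sample 100K items E, seed the retained set S with the k = 1% oldest items of E, then stream E while learning model parameters online. The sketch below shows one plausible way to stage that protocol; the `timestamp` key, the `update_model` callback, and the fixed random seed are assumptions, not details stated in the paper.

```python
import random

def build_evaluation_stream(items, sample_size=100_000, k=0.01, seed=0):
    """Stage the quoted setup: sample |E| = 100K items and seed the
    retained set S with the k = 1% oldest items of E.
    The "timestamp" key is an assumed field (e.g. a tweet's created_at)."""
    rng = random.Random(seed)
    E = rng.sample(list(items), sample_size)
    E.sort(key=lambda item: item["timestamp"])  # oldest first
    S = list(E[: int(k * len(E))])              # initial retained items
    return E, S

def replay(E, S, update_model):
    """Stream E one item at a time and learn model parameters online;
    `update_model` is a placeholder for the actual online update."""
    for item in E:
        update_model(item, S)
```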
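
The Experiment Setup row follows [Hoffman et al., 2013], where a forgetting rate kappa in (0.5, 1] controls a step size of the form rho_t = (tau0 + t)^(-kappa). The harness below sketches such a sweep with the window size fixed at 10 and results averaged over repeated runs; `run_experiment`, the grid of kappa values, and the number of repeats are hypothetical placeholders.

```python
import statistics

def step_size(t: int, kappa: float, tau0: float = 1.0) -> float:
    """Robbins-Monro style step size rho_t = (tau0 + t)^(-kappa); the
    forget rate kappa in (0.5, 1] is the quantity varied in the sweep.
    Shown for reference: an online learner would apply this at update t."""
    return (tau0 + t) ** (-kappa)

def sweep_forget_rate(run_experiment, kappas=(0.6, 0.7, 0.8, 0.9, 1.0),
                      window_size=10, repeats=5):
    """Run the (hypothetical) experiment callback for each forget rate,
    keep the window size fixed at 10, and report the average score."""
    results = {}
    for kappa in kappas:
        scores = [run_experiment(kappa, window_size) for _ in range(repeats)]
        results[kappa] = statistics.mean(scores)
    return results
```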