Ranking Tweets by Labeled and Collaboratively Selected Pairs with Transitive Closure

Authors: Shenghua Liu, Xueqi Cheng, Fangtao Li

AAAI 2014

Reproducibility assessment, by variable (result and LLM response):
Research Type: Experimental
LLM Response: "Extensive experiments are conducted on TREC Microblogging corpus. The results demonstrate that our proposed method achieves significant improvement, compared to several state-of-the-art models."
Researcher Affiliation: Collaboration
LLM Response: Shenghua Liu and Xueqi Cheng are with the Institute of Computing Technology, Chinese Academy of Sciences, No. 6 Kexueyuan South Road, Haidian District, Beijing, China 100190; Fangtao Li is with Google Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043.
Pseudocode: No
LLM Response: The paper describes the model learning process in text, detailing 'the optimization step and the selection step with transitive closure', but it does not provide formal pseudocode or an algorithm block.
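
Since the paper gives no algorithm block, the following is a minimal runnable Python sketch of what such an alternating optimization/selection scheme with transitive closure could look like. The names fit_ranker and pair_confidence are trivial stand-ins introduced for illustration, not the authors' implementation.

```python
# A minimal sketch, assuming the alternating scheme the paper describes
# in prose: an optimization step (fit a ranker on the current preference
# pairs) and a selection step (add confident unlabeled pairs, then close
# the set under transitivity). fit_ranker and pair_confidence are trivial
# stand-ins, not the authors' code.

def transitive_closure(pairs):
    """Close a set of preference pairs (a, b), read 'a ranks above b',
    under transitivity: (a, b) and (b, c) imply (a, c)."""
    closed = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

def fit_ranker(pairs):
    """Placeholder optimization step: memorizes the pairs as the 'model'."""
    return set(pairs)

def pair_confidence(model, pair):
    """Placeholder confidence score for an unlabeled pair."""
    return 1.0 if pair in model else 0.5

def train_with_selection(labeled_pairs, unlabeled_pairs, delta, n_rounds=5):
    pairs = transitive_closure(labeled_pairs)
    model = None
    for _ in range(n_rounds):
        model = fit_ranker(pairs)                           # optimization step
        selected = {p for p in unlabeled_pairs
                    if pair_confidence(model, p) >= delta}  # selection step
        pairs = transitive_closure(pairs | selected)        # transitive closure
    return model

# Example: (a, b) and (b, c) imply (a, c) after closure.
print(sorted(transitive_closure({("a", "b"), ("b", "c")})))
# [('a', 'b'), ('a', 'c'), ('b', 'c')]
```
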
Open Source Code: No
LLM Response: The paper does not provide any statement about releasing its own source code, nor does it include a link to a code repository for the methodology described.
Open Datasets: Yes
LLM Response: "We use TREC Microblogging corpus from the 2012 release." The corpus is available at http://trec.nist.gov/data/tweets/.
Dataset Splits: No
LLM Response: The paper specifies training and testing query sets ('the first 49 queries as training queries, and the rest of 60 as testing queries') and describes how labeled and unlabeled data are generated from training queries, including sampling methods and percentages (e.g., 'We sample 50% of the labeled data from 49 training queries'). However, it does not explicitly mention a 'validation' dataset split.
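
For concreteness, here is a small illustrative reconstruction of the quoted split (49 training queries, the remaining 60 for testing, 50% of labeled pairs sampled from training queries). The query IDs and the sampling helper are assumptions, not taken from the paper.

```python
import random

# Illustrative reconstruction of the quoted split: 109 queries in total,
# the first 49 for training and the remaining 60 for testing. The query
# IDs and the sampling helper below are assumptions, not the paper's code.
query_ids = list(range(1, 110))   # queries 1..109
train_queries = query_ids[:49]    # first 49 queries for training
test_queries = query_ids[49:]     # remaining 60 queries for testing

def sample_labeled_pairs(labeled_pairs, fraction=0.5, seed=0):
    """Sample a fraction (e.g. 50%) of the labeled pairs from training queries."""
    rng = random.Random(seed)
    return rng.sample(list(labeled_pairs), int(len(labeled_pairs) * fraction))
```
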
Hardware Specification: No
LLM Response: The paper does not explicitly describe the hardware used to run its experiments, nor does it mention specific GPU or CPU models, or cloud computing instances.
Software Dependencies: No
LLM Response: The paper mentions RankLib as an implementation for baselines but does not provide specific version numbers for it or any other software dependency.
Experiment Setup: Yes
LLM Response: "Experiments with different confidence thresholds δ are shown in Table 2. From the results, we can see that in the setting of confidence δ = 0.5, CSR-TC performs better than others." The paper also states: "Our CSR-TC is trained with confidence threshold δ = 0.7."
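
To make the threshold's role concrete, a hedged one-function sketch; pair_confidence is the same hypothetical scorer used in the earlier sketch, not the authors' code.

```python
def select_confident_pairs(candidate_pairs, model, delta=0.7):
    """Keep collaboratively selected pairs whose confidence reaches the
    threshold delta; the quoted setup trains CSR-TC with delta = 0.7.
    pair_confidence is a hypothetical scorer, not the authors' code."""
    return [p for p in candidate_pairs if pair_confidence(model, p) >= delta]
```
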