LSDH: A Hashing Approach for Large-Scale Link Prediction in Microblogs
Authors: Dawei Liu, Yuanzhuo Wang, Yantao Jia, Jingyuan Li, Zhihua Yu
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments were conducted on a Twitter dataset, and the preliminary results demonstrated the effectiveness of LSDH in predicting the likelihood of future associations between people. |
| Researcher Affiliation | Academia | Institute of Network Technology, Institute of Computing Technology (Yantai), CAS, Beijing, P.R. China; Institute of Computing Technology, CAS, Beijing, P.R. China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | No | The paper states: "We crawled the Twitter linkage dataset through the API service provided by the official Twitter website." Although the data source is Twitter, the specific crawled dataset of '12,000 users and 20,0000 tweets' is not made publicly available with concrete access information (link, DOI, or repository). |
| Dataset Splits | No | The paper describes using a Twitter dataset but does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper discusses the parameters of its LSH scheme (W, k, L) and their optimization, but it does not provide concrete experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or training configurations; see the sketch after this table for what those parameters typically govern. |
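
For context, the parameter names (W, k, L) match the standard p-stable LSH construction (Datar et al. 2004), where W is the bucket width, k the number of hash functions concatenated per table, and L the number of hash tables. The sketch below is a minimal illustration under the assumption that the paper follows this convention; it is not the paper's implementation, and all identifiers (`PStableLSH`, `dim`, `seed`) are hypothetical.

```python
import numpy as np

class PStableLSH:
    """Minimal p-stable LSH sketch with the three tunable parameters
    W (bucket width), k (hash functions per table), L (hash tables)."""

    def __init__(self, dim, W=4.0, k=8, L=10, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W
        # One (a, b) pair per hash function: a ~ N(0, 1)^dim, b ~ U[0, W).
        self.a = rng.standard_normal((L, k, dim))
        self.b = rng.uniform(0.0, W, size=(L, k))
        self.tables = [dict() for _ in range(L)]

    def _keys(self, v):
        # h(v) = floor((a . v + b) / W); the k values per table are
        # concatenated into one bucket key.
        proj = np.floor((self.a @ v + self.b) / self.W).astype(int)
        return [tuple(row) for row in proj]

    def insert(self, label, v):
        for table, key in zip(self.tables, self._keys(v)):
            table.setdefault(key, set()).add(label)

    def query(self, v):
        # Union of bucket collisions across the L tables; each candidate
        # could then be scored for link likelihood.
        candidates = set()
        for table, key in zip(self.tables, self._keys(v)):
            candidates |= table.get(key, set())
        return candidates


# Example: index random user feature vectors, then fetch candidates for one.
lsh = PStableLSH(dim=64)
vectors = np.random.default_rng(1).standard_normal((100, 64))
for i, v in enumerate(vectors):
    lsh.insert(i, v)
print(lsh.query(vectors[0]))  # contains 0 plus any colliding users
```

In this construction, raising k makes each bucket more selective (fewer false candidates) while raising L recovers recall at the cost of memory and query time; that trade-off is presumably what the paper's parameter optimization addresses, though the paper itself reports no concrete values.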