Lightweight Label Propagation for Large-Scale Network Data
Authors: De-Ming Liang, Yu-Feng Li
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We carry out three tasks to evaluate SLP. The first task is large-scale network analysis. In the second, we compare SLP to several state-of-the-art algorithms on a categorization dataset. Finally, we show the influence of graph density on the performance of SLP. |
| Researcher Affiliation | Academia | De-Ming Liang and Yu-Feng Li, National Key Laboratory for Novel Software Technology, Nanjing University; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China. {liangdm, liyf}@lamda.nju.edu.cn |
| Pseudocode | Yes | Algorithm 1 SLP Framework |
| Open Source Code | No | No explicit statement or link indicating that the authors' own source code for SLP is publicly available. |
| Open Datasets | Yes | We collect three large-scale network datasets and perform label propagation on them. These graphs [Yang and Leskovec, 2015; Yin et al., 2017] come from Wikipedia, LiveJournal, and Orkut. |
| Dataset Splits | No | No explicit train/validation/test splits are provided. The paper states that "0.1%, 0.2%, 0.4%, 0.8%, 1.6%, 3.2% of instances are randomly selected as labeled data", but this describes the proportion of labeled data in each experiment, not a general split for model development and evaluation. |
| Hardware Specification | Yes | We run these evaluations on a PC with 3.2GHz AMD Ryzen 1400 CPU and 16GB RAM. |
| Software Dependencies | No | No specific software dependencies with version numbers were provided. |
| Experiment Setup | Yes | We use default hyper-parameters for SLP. The step size η is set to 1/10^t, where t is the index of the next step, and the number of epochs T is set to 6. (A hedged code sketch of this setup appears after this table.) |
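
The rows above quote the paper's setup rather than showing runnable code. As a rough illustration only, here is a minimal sketch of iterative label propagation on a sparse graph, wired to the quoted settings: a step size η = 1/10^t, T = 6 epochs, and a 0.1% randomly labeled fraction. This is not the authors' SLP implementation, and the update rule is a generic propagation step rather than the paper's Algorithm 1; all function and variable names (`propagate_labels`, `adjacency`, and so on) are hypothetical.

```python
# Illustrative sketch only -- NOT the authors' SLP code. It applies a generic
# label-propagation update with the quoted step-size schedule eta = 1/10^t
# and T = 6 epochs, clamping labeled instances after each epoch.
import numpy as np
import scipy.sparse as sp

def propagate_labels(adjacency, labels, labeled_mask, n_classes, T=6):
    """Propagate labels over a graph for T epochs.

    adjacency    : scipy.sparse matrix (n x n), symmetric, non-negative weights
    labels       : int array of length n (only entries under labeled_mask are read)
    labeled_mask : boolean array of length n, True for labeled instances
    """
    n = adjacency.shape[0]
    # Row-normalize the adjacency matrix to obtain a transition matrix P.
    degrees = np.asarray(adjacency.sum(axis=1)).ravel()
    degrees[degrees == 0] = 1.0  # avoid division by zero on isolated nodes
    P = sp.diags(1.0 / degrees) @ adjacency

    # One-hot label matrix F; unlabeled rows start at zero.
    F = np.zeros((n, n_classes))
    F[labeled_mask, labels[labeled_mask]] = 1.0

    for t in range(1, T + 1):
        eta = 1.0 / 10 ** t  # step size 1/10^t, as quoted in the setup row
        F = (1 - eta) * F + eta * (P @ F)
        # Clamp labeled instances back to their ground-truth labels.
        F[labeled_mask] = 0.0
        F[labeled_mask, labels[labeled_mask]] = 1.0
    return F.argmax(axis=1)

# Usage: randomly select 0.1% of instances as labeled, mirroring the smallest
# labeled-data proportion quoted in the dataset-splits row (synthetic graph).
rng = np.random.default_rng(0)
n = 10_000
adjacency = sp.random(n, n, density=1e-3, random_state=0, format="csr")
adjacency = adjacency + adjacency.T          # symmetrize the graph
true_labels = rng.integers(0, 5, size=n)
labeled_mask = rng.random(n) < 0.001         # 0.1% labeled
pred = propagate_labels(adjacency, true_labels, labeled_mask, n_classes=5)
```

Repeating the same call with the labeled fraction raised through 0.2%, 0.4%, 0.8%, 1.6%, and 3.2% would mirror the sequence of labeled-data proportions the paper reports.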