Semi-Data-Driven Network Coarsening
Authors: Li Gao, Jia Wu, Hong Yang, Zhi Qiao, Chuan Zhou, Yue Hu
IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both synthetic and real-world data sets demonstrate the quality and effectiveness of the proposed method. |
| Researcher Affiliation | Collaboration | Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; Quantum Computation & Intelligent Systems Centre, University of Technology Sydney, Australia; MathWorks, Beijing, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China |
| Pseudocode | Yes | Algorithm 1: Network Node Label Distribution Learning; Algorithm 2: The Semi-NetCoarsen(G, C, ·) algorithm |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for their methodology is open-source or publicly available. |
| Open Datasets | Yes | For synthetic data, we consider the Kronecker graph model [Leskovec et al., 2010]... For real-world data, we collect Twitter data [Zhang et al., 2013] (http://aminer.org/billboard/Influencelocality). A generic Kronecker-graph sampling sketch is given after the table. |
| Dataset Splits | No | The paper tunes parameters on a 'sub-graph with 1,000 nodes and 2,000 cascade data for each dataset' but does not specify explicit train/validation/test dataset splits with percentages or counts for the main datasets used in the experiments. |
| Hardware Specification | Yes | All experiments are conducted on a Linux system with 6-core 1.4 GHz AMD CPUs and 32 GB memory. |
| Software Dependencies | No | The paper mentions adapting 'Accelerated Proximal Gradient (APG)' but does not list specific software libraries or their version numbers used in the implementation (a generic APG sketch is given after the table). |
| Experiment Setup | Yes | The parameter λ is searched from λ ∈ {0.1, 1, 10, 100}, λy is searched from λy ∈ {0.01, 0.1, 1}, and a third parameter is selected from {0.01, 0.1, 1}. The parameters are tuned by the smallest E(S) on a sub-graph with 1,000 nodes and 2,000 cascade data for each dataset (with one further parameter fixed at 0.5). Table 2 reports the results. In experiments, we empirically set γ = 0.4, a further constant to 0.001, and Lf = 10⁻⁶·N·Nc. |
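A minimal sketch of the grid search described in the Experiment Setup row, assuming a placeholder `evaluate_E` routine that would run the coarsening on the 1,000-node tuning sub-graph with 2,000 cascades and return E(S); the authors' implementation is not public, and the third hyperparameter is named `third` here because its symbol is not legible in the extracted text.

```python
from itertools import product
import random

# Search ranges reported in the paper; everything else is a placeholder sketch.
lambda_grid = [0.1, 1, 10, 100]
lambda_y_grid = [0.01, 0.1, 1]
third_grid = [0.01, 0.1, 1]   # third hyperparameter; symbol not legible in the extracted text

def evaluate_E(lmbda, lmbda_y, third):
    """Placeholder: run the coarsening on the 1,000-node tuning sub-graph
    with 2,000 cascades and return the error measure E(S)."""
    return random.random()    # replace with the real evaluation

random.seed(0)
best = None
for lmbda, lmbda_y, third in product(lambda_grid, lambda_y_grid, third_grid):
    score = evaluate_E(lmbda, lmbda_y, third)
    if best is None or score < best[0]:
        best = (score, lmbda, lmbda_y, third)

print("selected hyperparameters (smallest E(S)):", best[1:])
```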
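For the synthetic data, the paper uses the stochastic Kronecker graph model of Leskovec et al. (2010). Below is a minimal NumPy sketch of how such a graph can be sampled from a 2×2 initiator matrix; the initiator values and graph size are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def sample_kronecker_graph(initiator, k, seed=0):
    """Sample a stochastic Kronecker graph: take the k-th Kronecker power of the
    initiator probability matrix and draw each edge as a Bernoulli variable.
    (Illustrative O(N^2) version; at the paper's scale the faster edge-by-edge
    sampler of Leskovec et al., 2010 would be needed.)"""
    rng = np.random.default_rng(seed)
    probs = initiator.copy()
    for _ in range(k - 1):
        probs = np.kron(probs, initiator)      # edge-probability matrix of size 2^k x 2^k
    adj = rng.random(probs.shape) < probs      # one Bernoulli draw per potential edge
    np.fill_diagonal(adj, False)               # drop self-loops
    return adj

# Example: illustrative initiator, not the parameters used in the paper.
initiator = np.array([[0.9, 0.5],
                      [0.5, 0.1]])
adj = sample_kronecker_graph(initiator, k=10)  # 1,024-node graph
print(adj.shape, int(adj.sum()), "edges sampled")
```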
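The paper adapts Accelerated Proximal Gradient (APG) to its objective; the adaptation itself is not detailed in this report. For reference, here is a generic APG/FISTA sketch for minimizing a smooth loss plus an ℓ1 penalty (soft-thresholding as the proximal step), with the step size taken from a Lipschitz constant Lf as in the setup above. This is a textbook illustration on a toy LASSO problem, not the authors' solver.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def apg_lasso(A, b, lam, Lf, n_iter=200):
    """Generic accelerated proximal gradient (FISTA) for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, using step size 1/Lf."""
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                  # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / Lf, lam / Lf)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2  # momentum update
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy usage; Lf is the largest eigenvalue of A^T A (Lipschitz constant of the gradient).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
Lf = np.linalg.norm(A, 2) ** 2
print(apg_lasso(A, b, lam=0.1, Lf=Lf)[:5])
```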