Modularity Based Community Detection with Deep Learning
Authors: Liang Yang, Xiaochun Cao, Dongxiao He, Chuan Wang, Xiao Wang, Weixiong Zhang
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on synthetic and real networks show that the new methods are effective, outperforming most state-of-the-art methods for community detection. |
| Researcher Affiliation | Academia | Liang Yang (1,2,3), Xiaochun Cao (1), Dongxiao He (3), Chuan Wang (1), Xiao Wang (4), Weixiong Zhang (5,6). 1: State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences; 2: School of Information Engineering, Tianjin University of Commerce; 3: School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University; 4: Department of Computer Science and Technology, Tsinghua University; 5: College of Math and Computer Science, Institute for Systems Biology, Jianghan University; 6: Department of Computer Science and Engineering, Washington University in St. Louis |
| Pseudocode | No | No, the paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | No, the paper does not provide any links or statements about the availability of its source code. |
| Open Datasets | Yes | Nine widely-used real networks, listed in Table 1, were used for evaluation. ... Karate [Zachary, 1977], Dolphins [Lusseau and Newman, 2004], Friendship6 [Xie et al., 2013], Friendship7 [Xie et al., 2013], Football [Girvan and Newman, 2002], Polbooks [Newman, 2006], Polblogs [Adamic and Glance, 2005], Cora [Yang et al., 2009]. |
| Dataset Splits | No | No, the paper does not explicitly mention training, validation, and test splits or percentages for datasets. |
| Hardware Specification | No | No, the paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | No, the paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | The layer configurations of the deep neural networks for different problems tested are shown in Table 2. ... The networks have at most 3 stacked Auto-Encoders, and the dimension of each latent space is less than that of its input and output spaces. For example, the stacked Auto-Encoder network for the LFR network consists of three Auto-Encoders, where the first is 1,000-512-1,000, the second 512-256-512 and the third 256-128-256. All Auto-Encoders were trained separately. ... We set the training batch to the size of the network and ran at most 100,000 iterations. For semi-DNR, we set the balancing parameter λ = 1000. |
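The setup above describes greedy layer-wise training of stacked Auto-Encoders (e.g. 1,000-512-1,000, then 512-256-512, then 256-128-256 for the LFR network), with each Auto-Encoder trained separately on the latent output of the previous one. The paper releases no code, so the sketch below is only an illustration of that training scheme, not the authors' implementation: the sigmoid activation, mean-squared reconstruction loss, learning rate, and the scaled-down layer sizes are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, hidden_dim, lr=0.1, iters=200, seed=0):
    """Train one Auto-Encoder (in_dim -> hidden_dim -> in_dim) by
    plain gradient descent on mean-squared reconstruction error.
    Loss and optimizer are assumptions; the paper does not state them."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden_dim))
    b1 = np.zeros(hidden_dim)
    W2 = rng.normal(0.0, 0.1, (hidden_dim, d))
    b2 = np.zeros(d)
    for _ in range(iters):
        H = sigmoid(X @ W1 + b1)       # encode
        R = H @ W2 + b2                # linear decode (reconstruction)
        err = R - X
        # backpropagate the reconstruction error
        gW2 = H.T @ err / n
        gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * H * (1.0 - H)
        gW1 = X.T @ dH / n
        gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1

def stacked_encode(X, hidden_dims):
    """Greedy layer-wise stacking: train each Auto-Encoder separately,
    then feed its latent codes to the next, as the setup describes."""
    H = X
    for h in hidden_dims:
        W, b = train_autoencoder(H, h)
        H = sigmoid(H @ W + b)
    return H

# Toy run with shrinking latent dimensions (64 -> 32 -> 16 -> 8),
# mirroring the "each latent space smaller than its input" constraint.
X = np.random.default_rng(1).random((32, 64))
Z = stacked_encode(X, [32, 16, 8])
print(Z.shape)  # (32, 8)
```

Each call to `train_autoencoder` optimizes one layer in isolation, which matches the report's note that "All Auto-Encoders were trained separately"; a real reproduction would feed the modularity matrix of the network as `X`.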