Attributed Graph Clustering via Adaptive Graph Convolution
Authors: Xiaotong Zhang, Han Liu, Qimai Li, Xiao-Ming Wu
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We establish the validity of our method by theoretical analysis and extensive experiments on benchmark datasets. Empirical results show that our method compares favourably with state-of-the-art methods. |
| Researcher Affiliation | Academia | Department of Computing, The Hong Kong Polytechnic University zxt.dut@hotmail.com, liu.han.dut@gmail.com, {csqmli,csxmwu}@comp.polyu.edu.hk |
| Pseudocode | Yes | The overall procedure of AGC is shown in Algorithm 1. (A hedged code sketch of this procedure appears after the table.) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate our method AGC on four benchmark attributed networks. Cora, Citeseer and Pubmed [Kipf and Welling, 2016] are citation networks... Wiki [Yang et al., 2015] is a webpage network... |
| Dataset Splits | No | The paper mentions an internal criterion for adaptively selecting k ('find the first local minimum of intra(C)'), which serves as a model-selection step rather than a data split. It does not provide explicit numerical train/validation/test splits for the datasets. |
| Hardware Specification | Yes | The running time of AGC (in Python, with Intel Core i7-8700 CPU and 16GB memory) on Cora, Citeseer, Pubmed and Wiki is 5.78, 62.06, 584.87, and 10.75 seconds respectively. |
| Software Dependencies | No | The paper mentions 'in Python' for the running time, but does not specify any software libraries, frameworks, or their version numbers. |
| Experiment Setup | Yes | For AGC, we set max_iter = 60. For DeepWalk, the number of random walks is 10, the number of latent dimensions for each node is 128, and the path length of each random walk is 80. For GAE and VGAE, we construct encoders with a 32-neuron hidden layer and a 16-neuron embedding layer, and train the encoders for 200 iterations using the Adam optimizer with learning rate 0.01. (A sketch of this baseline configuration follows the AGC sketch below.) |
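The Pseudocode and Dataset Splits rows above refer to Algorithm 1 and to the first-local-minimum criterion for choosing k. Below is a minimal sketch of that procedure as we read it from the paper: k-order graph convolution with the low-pass filter (I - L_s/2)^k, spectral clustering on a similarity matrix built from the filtered features, and stopping at the first local minimum of the intra-cluster distance intra(C). It assumes NumPy, SciPy, and scikit-learn (the paper only says the code is in Python) and a dense adjacency matrix; all function and variable names are ours, not the authors'.

```python
import numpy as np
from scipy.sparse import csgraph
from sklearn.cluster import SpectralClustering


def intra_cluster_distance(features, labels):
    """Average of per-cluster mean pairwise distances, i.e. intra(C)."""
    clusters = np.unique(labels)
    total = 0.0
    for c in clusters:
        members = features[labels == c]
        if len(members) < 2:
            continue
        # O(|C|^2) pairwise distances; acceptable for a sketch.
        diffs = members[:, None, :] - members[None, :, :]
        total += np.linalg.norm(diffs, axis=-1).sum() / (len(members) * (len(members) - 1))
    return total / len(clusters)


def agc(adjacency, features, n_clusters, max_iter=60):
    """Sketch of AGC: raise k until intra(C) hits its first local minimum."""
    # Low-pass filter I - L_s/2, with L_s the symmetrically normalized Laplacian.
    laplacian = csgraph.laplacian(adjacency, normed=True)
    g = np.eye(adjacency.shape[0]) - 0.5 * laplacian

    filtered = features.astype(float)
    best_labels, prev_intra, best_k = None, np.inf, 0
    for k in range(1, max_iter + 1):
        filtered = g @ filtered  # one more order of graph convolution
        s = filtered @ filtered.T
        similarity = 0.5 * (np.abs(s) + np.abs(s.T))  # nonnegative, symmetric
        labels = SpectralClustering(
            n_clusters=n_clusters, affinity="precomputed"
        ).fit_predict(similarity)
        cur_intra = intra_cluster_distance(filtered, labels)
        if cur_intra > prev_intra:
            return best_labels, best_k  # first local minimum of intra(C)
        best_labels, prev_intra, best_k = labels, cur_intra, k
    return best_labels, best_k
```

Stopping at the first local minimum rather than searching for a global one matches the paper's concern that a too-large k over-smooths node features.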
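The Experiment Setup row fixes the baseline hyperparameters for GAE and VGAE. The sketch below shows one way to realize that configuration: a two-layer GCN encoder with a 32-neuron hidden layer and a 16-neuron embedding layer, trained for 200 iterations with Adam at learning rate 0.01. Only those numbers come from the quote; the layer and reconstruction-loss details follow the standard GAE formulation rather than the AGC paper itself, PyTorch is assumed, and all names are ours.

```python
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, norm_adj, x):
        # norm_adj is the renormalized adjacency D^{-1/2}(A + I)D^{-1/2}.
        return norm_adj @ self.linear(x)


class GAEEncoder(nn.Module):
    """GCN encoder: 32-neuron hidden layer, 16-neuron embedding layer."""
    def __init__(self, in_dim, hidden_dim=32, embed_dim=16):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, embed_dim)

    def forward(self, norm_adj, x):
        return self.gc2(norm_adj, torch.relu(self.gc1(norm_adj, x)))


def train_gae(norm_adj, features, adj_label, iters=200, lr=0.01):
    """Train for 200 iterations with Adam at learning rate 0.01.

    adj_label is the 0/1 float adjacency the decoder reconstructs.
    """
    encoder = GAEEncoder(features.shape[1])
    optimizer = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(iters):
        z = encoder(norm_adj, features)
        recon = torch.sigmoid(z @ z.T)  # inner-product decoder
        loss = nn.functional.binary_cross_entropy(recon, adj_label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return encoder
```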