Disentangled Graph Convolutional Networks

Authors: Jianxin Ma, Peng Cui, Kun Kuang, Xin Wang, Wenwu Zhu

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we empirically assess the efficacy of DisenGCN on several node-related tasks, and analyze its behavior on synthetic graphs to gain further insight."
Researcher Affiliation | Academia | "Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, 100084, China."
Pseudocode | Yes | "Algorithm 1: The proposed DisenConv layer, with K channels." (A sketch of this layer in code follows the table.)
Open Source Code | No | The paper does not provide a direct link to a code repository or explicitly state that the source code is released.
Open Datasets | Yes | "We conduct our experiments on six real-world graphs, whose statistics are listed in Table 1. Citeseer, Cora, and Pubmed (Sen et al., 2008) are for semi-supervised node classification. ... BlogCatalog (Tang & Liu, 2009), PPI (Breitkreutz et al., 2008; Grover & Leskovec, 2016), and POS (Grover & Leskovec, 2016) are for multi-label node classification."
Dataset Splits | Yes | "The rest of the nodes are split equally to form a validation set and a test set. We follow the experiment protocol established by the previous works (Yang et al., 2016; Kipf & Welling, 2017; Veličković et al., 2018) strictly, and use the same dataset splits as them."
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions software such as Hyperopt (hyper-parameter tuning) and Adam (optimization) but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | "Hyper-parameters: Let d be the output dimension of a graph neural network's first layer. In the semi-supervised classification tasks, we follow GAT and use d = 64. In the multi-label classification tasks, we follow node2vec and use d = 128. ... We set T = 7. We set τ = 1. ... Specifically, we run hyperopt for 200 trials for each setting, with the hyper-parameter search space specified as follows: the learning rate ∼ loguniform[e^-8, 1], the ℓ2 regularization term ∼ loguniform[e^-10, 1], the dropout rate ∈ {0.05, 0.10, . . . , 0.95}, the number of layers L ∈ {1, 2, . . . , 6}, the number of channels used by the first layer K(1) ∈ {4, 8, . . . , 32}, and K(l) − K(l+1) = ΔK ∈ {0, 2, . . . , 8}." (A Hyperopt sketch of this search space follows the table.)
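
Since no official code is linked, the following is a minimal PyTorch sketch of the DisenConv layer's neighborhood routing as summarized in Algorithm 1 (project node features into K channels, then T iterations of channel-wise softmax routing over neighbors). The dense 0/1 adjacency input, the ReLU nonlinearity, and all identifiers are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DisenConv(nn.Module):
    # Illustrative sketch of one disentangled convolution layer with K channels.
    def __init__(self, in_dim, out_dim, K=8, T=7, tau=1.0):
        super().__init__()
        assert out_dim % K == 0
        self.K, self.T, self.tau = K, T, tau
        self.d = out_dim // K                      # per-channel dimension
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: [n, in_dim] node features; adj: [n, n] dense 0/1 adjacency (assumption).
        n = x.size(0)
        z = self.lin(x).view(n, self.K, self.d)
        z = F.normalize(torch.relu(z), dim=-1)     # channel-wise unit vectors z_{i,k}
        c = z                                      # initialize c_{i,k} = z_{i,k}
        mask = adj.unsqueeze(-1)                   # [n, n, 1], keeps only real neighbors
        for _ in range(self.T):                    # T routing iterations
            # logits[i, j, k] = z_{j,k} . c_{i,k} / tau, softmaxed over the K channels
            logits = torch.einsum('jkd,ikd->ijk', z, c) / self.tau
            p = torch.softmax(logits, dim=-1) * mask
            # c_{i,k} <- normalize(z_{i,k} + sum_j p[i,j,k] * z_{j,k})
            c = F.normalize(z + torch.einsum('ijk,jkd->ikd', p, z), dim=-1)
        return c.reshape(n, self.K * self.d)       # concatenate the K channels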
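
The quoted search space also translates directly into a Hyperopt call. Below is a sketch using only the quoted ranges and the 200-trial budget; the objective train_and_eval and the parameter names are hypothetical placeholders, since the paper reports the search space but not the tuning code. Note that hp.loguniform(label, a, b) samples exp(uniform(a, b)), so a = -8, b = 0 matches loguniform[e^-8, 1].

from hyperopt import fmin, hp, tpe, Trials

space = {
    'lr':      hp.loguniform('lr', -8, 0),    # loguniform[e^-8, 1]
    'l2':      hp.loguniform('l2', -10, 0),   # loguniform[e^-10, 1]
    'dropout': hp.choice('dropout', [round(0.05 * i, 2) for i in range(1, 20)]),
    'layers':  hp.choice('layers', [1, 2, 3, 4, 5, 6]),
    'K1':      hp.choice('K1', [4, 8, 12, 16, 20, 24, 28, 32]),
    'dK':      hp.choice('dK', [0, 2, 4, 6, 8]),
}

def objective(params):
    # train_and_eval is a hypothetical helper that trains the model with
    # these hyper-parameters and returns the validation error to minimize.
    return train_and_eval(**params)

best = fmin(objective, space, algo=tpe.suggest, max_evals=200, trials=Trials())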