Independence Promoted Graph Disentangled Networks

Authors: Yanbei Liu, Xiao Wang, Shu Wu, Zhitao Xiao

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental studies on real-world graphs validate our model and demonstrate that our algorithms outperform the state-of-the-arts by a wide margin in different network applications, including semi-supervised graph classification, graph clustering and graph visualization.
Researcher Affiliation | Academia | Yanbei Liu (1), Xiao Wang (2), Shu Wu (3), Zhitao Xiao (1); (1) School of Life Sciences, Tiangong University; (2) School of Computer Science, Beijing University of Posts and Telecommunications; (3) Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Pseudocode | Yes | Algorithm 1: The Proposed IPGDN's Layer (a routing sketch follows the table).
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | We conduct our experiments on three standard citation network benchmark datasets, whose statistics are listed in Table 1. Cora, Citeseer and Pubmed (Sen et al. 2008) are all for semi-supervised node classification and node clustering.
Dataset Splits | Yes | We allow for only 20 nodes per class to be used for training. The predictive power of the trained models is evaluated on 1000 test nodes, and we use 500 additional nodes for validation purposes (the same ones as used by Kipf and Welling 2017; Veličković et al. 2017). A split-loading sketch follows the table.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using 'Adam' for optimization and 'hyperopt' for hyperparameter tuning but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | In our model, we use K = 4 and f = 16 for the test. Following Ma et al. (2019), we set the iterations of neighbourhood routing T = 7. Specifically, we run hyperopt for 200 trials for each setting, with the hyper-parameter search space specified as follows: learning rate loguniform[e^-8, 1]; ℓ2 regularization term loguniform[e^-10, 1]; dropout rate in {0.05, 0.10, ..., 0.95}; number of layers L in {1, 2, ..., 6}. A search-space sketch follows the table.
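
The paper's Algorithm 1 is only named above, not reproduced here. As a rough illustration, the following is a minimal PyTorch sketch of a DisenGCN-style neighbourhood-routing layer (Ma et al. 2019), the mechanism IPGDN builds on, using the reported hyperparameters K = 4 channels, f = 16 features per channel, and T = 7 routing iterations. The class and variable names are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

class NeighborhoodRoutingLayer(torch.nn.Module):
    """Sketch of a disentangled graph layer with neighbourhood routing
    (after Ma et al. 2019); hyperparameters follow the paper: K=4, f=16, T=7.
    Illustrative only, not the authors' implementation."""

    def __init__(self, in_dim, K=4, f=16, T=7):
        super().__init__()
        self.K, self.f, self.T = K, f, T
        # One shared linear projection, split into K latent channels.
        self.proj = torch.nn.Linear(in_dim, K * f)

    def forward(self, x, edge_index):
        # x: [N, in_dim]; edge_index: [2, E] with (source u, target v) pairs.
        N = x.size(0)
        src, dst = edge_index
        # Project node features into K unit-norm channel vectors.
        z = F.normalize(self.proj(x).view(N, self.K, self.f), dim=-1)  # [N, K, f]
        c = z  # initialize each node's channel centers with its own features
        for _ in range(self.T):
            # Per-edge, per-channel agreement between neighbor and center.
            logits = (z[src] * c[dst]).sum(-1)      # [E, K]
            p = torch.softmax(logits, dim=-1)       # route each neighbor over channels
            msg = p.unsqueeze(-1) * z[src]          # [E, K, f]
            agg = torch.zeros_like(c).index_add_(0, dst, msg)
            c = F.normalize(z + agg, dim=-1)        # update and renormalize centers
        return c.reshape(N, self.K * self.f)        # concatenated disentangled factors

With Cora's 1433-dimensional features, NeighborhoodRoutingLayer(1433) would produce 64-dimensional disentangled embeddings (4 channels of 16 features each).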
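The split quoted under Dataset Splits (20 labeled nodes per class, 500 validation nodes, 1000 test nodes) is the standard public Planetoid split. A minimal sketch of loading it, assuming PyTorch Geometric is available (the paper does not name its data-loading stack):

from torch_geometric.datasets import Planetoid

# Standard 'public' Planetoid splits: 20 train nodes per class,
# 500 validation nodes, 1000 test nodes (as used by Kipf & Welling 2017).
for name in ["Cora", "Citeseer", "Pubmed"]:
    data = Planetoid(root="data", name=name, split="public")[0]
    print(name,
          int(data.train_mask.sum()),
          int(data.val_mask.sum()),
          int(data.test_mask.sum()))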
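The search space under Experiment Setup maps directly onto hyperopt primitives. A sketch under that reading; train_and_evaluate is a hypothetical stand-in for the paper's unreleased training routine:

from hyperopt import fmin, hp, tpe

# Search space as reported: learning rate ~ loguniform[e^-8, 1],
# L2 term ~ loguniform[e^-10, 1], dropout in {0.05, 0.10, ..., 0.95},
# layers L in {1, ..., 6}. hp.loguniform(label, a, b) samples
# exp(uniform(a, b)), hence the exponents -8/-10 with upper bound 0.
space = {
    "lr": hp.loguniform("lr", -8, 0),
    "weight_decay": hp.loguniform("weight_decay", -10, 0),
    "dropout": hp.quniform("dropout", 0.05, 0.95, 0.05),
    "num_layers": hp.choice("num_layers", [1, 2, 3, 4, 5, 6]),
}

def train_and_evaluate(lr, weight_decay, dropout, num_layers):
    # Hypothetical placeholder: train IPGDN with these hyperparameters
    # and return the validation loss that hyperopt should minimize.
    return 0.0  # dummy value so the sketch executes end to end

def objective(params):
    return train_and_evaluate(**params)

best = fmin(objective, space, algo=tpe.suggest, max_evals=200)  # 200 trials per setting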