Diverse Message Passing for Attribute with Heterophily

Authors: Liang Yang, Mengzhe Li, Liyang Liu, Bingxin Niu, Chuan Wang, Xiaochun Cao, Yuanfang Guo

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluations on various real networks demonstrate the superiority of our DMP on handling the networks with heterophily and alleviating the over-smoothing issue, compared to the existing state-of-the-arts.
Researcher Affiliation | Academia | (1) School of Artificial Intelligence, Hebei University of Technology, Tianjin, China; (2) State Key Laboratory of Information Security, IIE, CAS, Beijing, China; (3) School of Cyber Science and Technology, Sun Yat-sen University, Shenzhen, China; (4) State Key Laboratory of Software Development Environment, Beihang University, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not explicitly provide an open-source code repository link or a clear statement about code availability.
Open Datasets | Yes | Citation networks: Cora, Citeseer, and Pubmed, which are widely used to evaluate GNNs, are the standard citation network benchmark datasets [33, 34]. WebKB webpage networks: Cornell, Texas, and Wisconsin... Co-occurrence network: Actor network... Wikipedia networks: Chameleon and Squirrel... Besides, three heterogeneous information networks (HINs), i.e., DBLP, ACM and IMDB, are also employed [37].
Dataset Splits | Yes | For all the datasets, nodes in each class are randomly split into three groups, 48% for training, 32% for validation, and 20% for testing, as mentioned in [8]. (A minimal split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions 'Adam [38] is adopted as the optimizer for all the models' but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | The hyper-parameters, including weight decay, dropout, initial learning rate and patience for learning rate decay, are tuned by searching on the validation set. Adam [38] is adopted as the optimizer for all the models. For fair comparisons to GCN and GAT, standard DMP utilizes a two-layered model. (A training-loop sketch follows the table.)
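
The 48%/32%/20% per-class split quoted in the 'Dataset Splits' row is straightforward to reproduce. The sketch below is a minimal illustration of that protocol, assuming node labels are held in a NumPy array; the function name, seed handling, and rounding behavior are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def per_class_split(labels, train_frac=0.48, val_frac=0.32, seed=0):
    """Randomly split node indices into train/val/test within each class.

    Fractions follow the 48%/32%/20% protocol quoted above; the seed and
    rounding choices are illustrative assumptions, not from the paper.
    """
    rng = np.random.default_rng(seed)
    train_idx, val_idx, test_idx = [], [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # nodes belonging to class c
        rng.shuffle(idx)                    # random split, per class
        n_train = int(round(train_frac * len(idx)))
        n_val = int(round(val_frac * len(idx)))
        train_idx.extend(idx[:n_train])
        val_idx.extend(idx[n_train:n_train + n_val])
        test_idx.extend(idx[n_train + n_val:])
    return np.array(train_idx), np.array(val_idx), np.array(test_idx)
```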
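
Likewise, the 'Experiment Setup' row names the tuned hyper-parameters (weight decay, dropout, initial learning rate, patience for learning-rate decay), Adam as the optimizer, and a two-layered model. Because no DMP implementation is released, the PyTorch sketch below substitutes a generic two-layer GCN-style network as a stand-in, only to show where those hyper-parameters typically enter the training loop; the architecture, function names, and default values are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerGNN(nn.Module):
    """Generic two-layer message-passing model (a GCN-style stand-in, not DMP)."""
    def __init__(self, in_dim, hid_dim, out_dim, dropout=0.5):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)
        self.dropout = dropout

    def forward(self, adj_norm, x):
        # adj_norm: dense normalized adjacency (N x N); x: node features (N x F)
        h = F.relu(adj_norm @ self.lin1(x))
        h = F.dropout(h, p=self.dropout, training=self.training)
        return adj_norm @ self.lin2(h)

def train(model, adj, x, y, train_idx, val_idx,
          lr=0.01, weight_decay=5e-4, lr_patience=25, epochs=500):
    # Adam with weight decay; the learning rate decays when the validation loss
    # plateaus, mirroring the "patience for learning rate decay" hyper-parameter.
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, patience=lr_patience)
    for epoch in range(epochs):
        model.train()
        opt.zero_grad()
        out = model(adj, x)
        loss = F.cross_entropy(out[train_idx], y[train_idx])
        loss.backward()
        opt.step()
        model.eval()
        with torch.no_grad():
            val_loss = F.cross_entropy(model(adj, x)[val_idx], y[val_idx])
        sched.step(val_loss)
```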